00:00:00.001 Started by upstream project "autotest-per-patch" build number 126105
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.095 The recommended git tool is: git
00:00:00.095 using credential 00000000-0000-0000-0000-000000000002
00:00:00.097 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.136 Fetching changes from the remote Git repository
00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.173 Using shallow fetch with depth 1
00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.173 > git --version # timeout=10
00:00:00.206 > git --version # 'git version 2.39.2'
00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.721 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.735 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.746 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD)
00:00:04.746 > git config core.sparsecheckout # timeout=10
00:00:04.757 > git read-tree -mu HEAD # timeout=10
00:00:04.774 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5
00:00:04.794 Commit message: "jjb/create-perf-report: make job run concurrent"
00:00:04.794 > git rev-list --no-walk a4dfdc44df8d07f755780ce4c74effabd30d33d0 # timeout=10
00:00:04.909 [Pipeline] Start of Pipeline
00:00:04.922 [Pipeline] library
00:00:04.923 Loading library shm_lib@master
00:00:04.923 Library shm_lib@master is cached. Copying from home.
00:00:04.939 [Pipeline] node
00:00:04.964 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.966 [Pipeline] {
00:00:04.976 [Pipeline] catchError
00:00:04.978 [Pipeline] {
00:00:04.989 [Pipeline] wrap
00:00:04.996 [Pipeline] {
00:00:05.001 [Pipeline] stage
00:00:05.002 [Pipeline] { (Prologue)
00:00:05.175 [Pipeline] sh
00:00:05.494 + logger -p user.info -t JENKINS-CI
00:00:05.513 [Pipeline] echo
00:00:05.514 Node: GP11
00:00:05.522 [Pipeline] sh
00:00:05.823 [Pipeline] setCustomBuildProperty
00:00:05.835 [Pipeline] echo
00:00:05.836 Cleanup processes
00:00:05.842 [Pipeline] sh
00:00:06.125 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.125 362814 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.139 [Pipeline] sh
00:00:06.425 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.432 ++ grep -v 'sudo pgrep'
00:00:06.432 ++ awk '{print $1}'
00:00:06.432 + sudo kill -9
00:00:06.432 + true
00:00:06.446 [Pipeline] cleanWs
00:00:06.453 [WS-CLEANUP] Deleting project workspace...
00:00:06.453 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.459 [WS-CLEANUP] done
00:00:06.462 [Pipeline] setCustomBuildProperty
00:00:06.472 [Pipeline] sh
00:00:06.748 + sudo git config --global --replace-all safe.directory '*'
00:00:06.805 [Pipeline] httpRequest
00:00:06.836 [Pipeline] echo
00:00:06.838 Sorcerer 10.211.164.101 is alive
00:00:06.844 [Pipeline] httpRequest
00:00:06.848 HttpMethod: GET
00:00:06.849 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:06.850 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:06.872 Response Code: HTTP/1.1 200 OK
00:00:06.872 Success: Status code 200 is in the accepted range: 200,404
00:00:06.873 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:10.377 [Pipeline] sh
00:00:10.667 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz
00:00:10.688 [Pipeline] httpRequest
00:00:10.725 [Pipeline] echo
00:00:10.727 Sorcerer 10.211.164.101 is alive
00:00:10.738 [Pipeline] httpRequest
00:00:10.743 HttpMethod: GET
00:00:10.744 URL: http://10.211.164.101/packages/spdk_a7a09b9a0c5c77a9c5b1cbd027b0139c4b9af87a.tar.gz
00:00:10.745 Sending request to url: http://10.211.164.101/packages/spdk_a7a09b9a0c5c77a9c5b1cbd027b0139c4b9af87a.tar.gz
00:00:10.749 Response Code: HTTP/1.1 200 OK
00:00:10.749 Success: Status code 200 is in the accepted range: 200,404
00:00:10.750 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a7a09b9a0c5c77a9c5b1cbd027b0139c4b9af87a.tar.gz
00:00:28.031 [Pipeline] sh
00:00:28.315 + tar --no-same-owner -xf spdk_a7a09b9a0c5c77a9c5b1cbd027b0139c4b9af87a.tar.gz
00:00:31.611 [Pipeline] sh
00:00:31.899 + git -C spdk log --oneline -n5
00:00:31.899 a7a09b9a0 lib/blob: add new trace BLOB_PROCESS_START/COMPLETE
00:00:31.899 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:31.899 6c7c1f57e accel: add sequence outstanding stat
00:00:31.899 3bc8e6a26 accel: add utility to put task
00:00:31.899 2dba73997 accel: move get task utility
00:00:31.915 [Pipeline] }
00:00:31.938 [Pipeline] // stage
00:00:31.950 [Pipeline] stage
00:00:31.953 [Pipeline] { (Prepare)
00:00:31.978 [Pipeline] writeFile
00:00:32.001 [Pipeline] sh
00:00:32.288 + logger -p user.info -t JENKINS-CI
00:00:32.301 [Pipeline] sh
00:00:32.587 + logger -p user.info -t JENKINS-CI
00:00:32.598 [Pipeline] sh
00:00:32.879 + cat autorun-spdk.conf
00:00:32.879 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:32.879 SPDK_TEST_NVMF=1
00:00:32.879 SPDK_TEST_NVME_CLI=1
00:00:32.879 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:32.879 SPDK_TEST_NVMF_NICS=e810
00:00:32.879 SPDK_TEST_VFIOUSER=1
00:00:32.879 SPDK_RUN_UBSAN=1
00:00:32.879 NET_TYPE=phy
00:00:32.887 RUN_NIGHTLY=0
00:00:32.891 [Pipeline] readFile
00:00:32.917 [Pipeline] withEnv
00:00:32.919 [Pipeline] {
00:00:32.933 [Pipeline] sh
00:00:33.217 + set -ex
00:00:33.217 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:33.217 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.217 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.217 ++ SPDK_TEST_NVMF=1
00:00:33.217 ++ SPDK_TEST_NVME_CLI=1
00:00:33.217 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.217 ++ SPDK_TEST_NVMF_NICS=e810
00:00:33.217 ++ SPDK_TEST_VFIOUSER=1
00:00:33.217 ++ SPDK_RUN_UBSAN=1
00:00:33.217 ++ NET_TYPE=phy
00:00:33.217 ++ RUN_NIGHTLY=0
00:00:33.217 + case $SPDK_TEST_NVMF_NICS in
00:00:33.217 + DRIVERS=ice
00:00:33.217 + [[ tcp == \r\d\m\a ]]
00:00:33.217 + [[ -n ice ]]
00:00:33.217 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:33.217 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:33.217 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:33.217 rmmod: ERROR: Module irdma is not currently loaded
00:00:33.217 rmmod: ERROR: Module i40iw is not currently loaded
00:00:33.217 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:33.217 + true
00:00:33.217 + for D in $DRIVERS
00:00:33.217 + sudo modprobe ice
00:00:33.217 + exit 0
00:00:33.226 [Pipeline] }
00:00:33.244 [Pipeline] // withEnv
00:00:33.249 [Pipeline] }
00:00:33.266 [Pipeline] // stage
00:00:33.275 [Pipeline] catchError
00:00:33.277 [Pipeline] {
00:00:33.292 [Pipeline] timeout
00:00:33.292 Timeout set to expire in 50 min
00:00:33.294 [Pipeline] {
00:00:33.309 [Pipeline] stage
00:00:33.311 [Pipeline] { (Tests)
00:00:33.327 [Pipeline] sh
00:00:33.610 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.610 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.610 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.610 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:33.610 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.610 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:33.610 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:33.610 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:33.610 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:33.610 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:33.610 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:33.610 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:33.610 + source /etc/os-release
00:00:33.610 ++ NAME='Fedora Linux'
00:00:33.610 ++ VERSION='38 (Cloud Edition)'
00:00:33.610 ++ ID=fedora
00:00:33.610 ++ VERSION_ID=38
00:00:33.610 ++ VERSION_CODENAME=
00:00:33.610 ++ PLATFORM_ID=platform:f38
00:00:33.610 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:33.610 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:33.610 ++ LOGO=fedora-logo-icon
00:00:33.610 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:33.610 ++ HOME_URL=https://fedoraproject.org/
00:00:33.610 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:33.610 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:33.610 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:33.610 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:33.610 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:33.610 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:33.610 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:33.610 ++ SUPPORT_END=2024-05-14
00:00:33.610 ++ VARIANT='Cloud Edition'
00:00:33.610 ++ VARIANT_ID=cloud
00:00:33.610 + uname -a
00:00:33.610 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:33.610 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:34.547 Hugepages
00:00:34.547 node hugesize free / total
00:00:34.547 node0 1048576kB 0 / 0
00:00:34.547 node0 2048kB 0 / 0
00:00:34.547 node1 1048576kB 0 / 0
00:00:34.547 node1 2048kB 0 / 0
00:00:34.547 
00:00:34.547 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:34.547 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:34.547 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:34.547 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:34.547 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:34.805 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:34.805 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:34.805 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:34.805 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:34.805 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:34.805 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:34.805 + rm -f /tmp/spdk-ld-path
00:00:34.805 + source autorun-spdk.conf
00:00:34.805 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.805 ++ SPDK_TEST_NVMF=1
00:00:34.805 ++ SPDK_TEST_NVME_CLI=1
00:00:34.805 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.805 ++ SPDK_TEST_NVMF_NICS=e810
00:00:34.805 ++ SPDK_TEST_VFIOUSER=1
00:00:34.805 ++ SPDK_RUN_UBSAN=1
00:00:34.805 ++ NET_TYPE=phy
00:00:34.805 ++ RUN_NIGHTLY=0
00:00:34.805 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:34.805 + [[ -n '' ]]
00:00:34.805 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.805 + for M in /var/spdk/build-*-manifest.txt
00:00:34.805 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:34.805 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:34.805 + for M in /var/spdk/build-*-manifest.txt
00:00:34.805 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:34.805 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:34.805 ++ uname
00:00:34.805 + [[ Linux == \L\i\n\u\x ]]
00:00:34.805 + sudo dmesg -T
00:00:34.805 + sudo dmesg --clear
00:00:34.805 + dmesg_pid=363487
00:00:34.805 + [[ Fedora Linux == FreeBSD ]]
00:00:34.805 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.805 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:34.805 + sudo dmesg -Tw
00:00:34.805 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:34.805 + [[ -x /usr/src/fio-static/fio ]]
00:00:34.805 + export FIO_BIN=/usr/src/fio-static/fio
00:00:34.805 + FIO_BIN=/usr/src/fio-static/fio
00:00:34.805 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:34.805 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:34.805 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:34.805 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.805 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:34.805 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:34.805 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.805 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:34.805 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:34.805 Test configuration:
00:00:34.805 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.805 SPDK_TEST_NVMF=1
00:00:34.805 SPDK_TEST_NVME_CLI=1
00:00:34.805 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:34.805 SPDK_TEST_NVMF_NICS=e810
00:00:34.805 SPDK_TEST_VFIOUSER=1
00:00:34.805 SPDK_RUN_UBSAN=1
00:00:34.805 NET_TYPE=phy
00:00:34.805 RUN_NIGHTLY=0
11:05:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
11:05:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
11:05:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:05:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:05:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.806 11:05:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.806 11:05:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.806 11:05:00 -- paths/export.sh@5 -- $ export PATH
00:00:34.806 11:05:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:34.806 11:05:00 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:34.806 11:05:00 -- common/autobuild_common.sh@444 -- $ date +%s
00:00:34.806 11:05:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720775100.XXXXXX
00:00:34.806 11:05:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720775100.cMo971
00:00:34.806 11:05:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:00:34.806 11:05:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:00:34.806 11:05:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:34.806 11:05:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:34.806 11:05:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:34.806 11:05:00 -- common/autobuild_common.sh@460 -- $ get_config_params
00:00:34.806 11:05:00 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:34.806 11:05:00 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.806 11:05:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:34.806 11:05:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:00:34.806 11:05:00 -- pm/common@17 -- $ local monitor
00:00:34.806 11:05:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.806 11:05:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.806 11:05:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.806 11:05:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:34.806 11:05:00 -- pm/common@21 -- $ date +%s
00:00:34.806 11:05:00 -- pm/common@25 -- $ sleep 1
00:00:34.806 11:05:00 -- pm/common@21 -- $ date +%s
00:00:34.806 11:05:00 -- pm/common@21 -- $ date +%s
00:00:34.806 11:05:00 -- pm/common@21 -- $ date +%s
00:00:34.806 11:05:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775100
00:00:34.806 11:05:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775100
00:00:34.806 11:05:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775100
00:00:34.806 11:05:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720775100
00:00:35.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775100_collect-vmstat.pm.log
00:00:35.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775100_collect-cpu-temp.pm.log
00:00:35.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775100_collect-cpu-load.pm.log
00:00:35.065 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720775100_collect-bmc-pm.bmc.pm.log
00:00:36.002 11:05:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:36.002 11:05:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:36.002 11:05:01 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:36.002 11:05:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.002 11:05:01 -- spdk/autobuild.sh@16 -- $ date -u
00:00:36.002 Fri Jul 12 09:05:01 AM UTC 2024
00:00:36.002 11:05:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:36.002 v24.09-pre-201-ga7a09b9a0
00:00:36.002 11:05:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:36.002 11:05:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:36.002 11:05:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:36.002 11:05:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:36.002 11:05:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:36.002 11:05:01 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.002 ************************************
00:00:36.002 START TEST ubsan
00:00:36.002 ************************************
00:00:36.002 11:05:01 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:36.002 using ubsan
00:00:36.002 
00:00:36.002 real 0m0.000s
00:00:36.002 user 0m0.000s
00:00:36.002 sys 0m0.000s
00:00:36.002 11:05:01 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:36.002 11:05:01 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:36.002 ************************************
00:00:36.002 END TEST ubsan
00:00:36.002 ************************************
00:00:36.002 11:05:01 -- common/autotest_common.sh@1142 -- $ return 0
00:00:36.002 11:05:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:36.002 11:05:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:36.002 11:05:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:36.002 11:05:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:36.002 11:05:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:36.002 11:05:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:36.002 11:05:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:36.002 11:05:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:36.002 11:05:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:36.002 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:36.002 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:36.262 Using 'verbs' RDMA provider
00:00:47.174 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:57.147 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:57.147 Creating mk/config.mk...done.
00:00:57.147 Creating mk/cc.flags.mk...done.
00:00:57.147 Type 'make' to build.
00:00:57.147 11:05:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:00:57.147 11:05:22 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:57.147 11:05:22 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:57.147 11:05:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:57.147 ************************************
00:00:57.147 START TEST make
00:00:57.147 ************************************
00:00:57.147 11:05:22 make -- common/autotest_common.sh@1123 -- $ make -j48
00:00:57.147 make[1]: Nothing to be done for 'all'.
00:00:58.547 The Meson build system
00:00:58.547 Version: 1.3.1
00:00:58.547 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:58.547 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:58.547 Build type: native build
00:00:58.547 Project name: libvfio-user
00:00:58.547 Project version: 0.0.1
00:00:58.547 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:58.547 C linker for the host machine: cc ld.bfd 2.39-16
00:00:58.547 Host machine cpu family: x86_64
00:00:58.547 Host machine cpu: x86_64
00:00:58.547 Run-time dependency threads found: YES
00:00:58.547 Library dl found: YES
00:00:58.547 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:58.547 Run-time dependency json-c found: YES 0.17
00:00:58.547 Run-time dependency cmocka found: YES 1.1.7
00:00:58.547 Program pytest-3 found: NO
00:00:58.547 Program flake8 found: NO
00:00:58.547 Program misspell-fixer found: NO
00:00:58.547 Program restructuredtext-lint found: NO
00:00:58.547 Program valgrind found: YES (/usr/bin/valgrind)
00:00:58.547 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:58.547 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:58.547 Compiler for C supports arguments -Wwrite-strings: YES
00:00:58.547 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:58.547 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:58.547 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:58.547 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:58.547 Build targets in project: 8
00:00:58.547 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:58.547 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:58.547 
00:00:58.547 libvfio-user 0.0.1
00:00:58.547 
00:00:58.547 User defined options
00:00:58.547 buildtype : debug
00:00:58.547 default_library: shared
00:00:58.547 libdir : /usr/local/lib
00:00:58.547 
00:00:58.547 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:59.500 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:59.763 [1/37] Compiling C object samples/null.p/null.c.o
00:00:59.763 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:59.763 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:59.763 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:59.763 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:59.763 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:59.763 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:59.763 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:59.763 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:59.763 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:59.763 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:59.763 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:59.763 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:59.763 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:59.763 [15/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:59.763 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:59.763 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:59.763 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:59.763 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:59.763 [20/37] Compiling C object samples/client.p/client.c.o
00:00:59.763 [21/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:59.763 [22/37] Compiling C object samples/server.p/server.c.o
00:01:00.026 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:00.026 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:00.026 [25/37] Linking target samples/client
00:01:00.026 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:00.026 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:00.026 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:00.026 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:00.026 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:00.026 [31/37] Linking target test/unit_tests
00:01:00.287 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:00.287 [33/37] Linking target samples/null
00:01:00.551 [34/37] Linking target samples/server
00:01:00.551 [35/37] Linking target samples/gpio-pci-idio-16
00:01:00.551 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:00.551 [37/37] Linking target samples/lspci
00:01:00.551 INFO: autodetecting backend as ninja
00:01:00.551 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:00.551 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:01.125 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:01.125 ninja: no work to do.
00:01:06.396 The Meson build system
00:01:06.396 Version: 1.3.1
00:01:06.396 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:06.396 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:06.396 Build type: native build
00:01:06.396 Program cat found: YES (/usr/bin/cat)
00:01:06.396 Project name: DPDK
00:01:06.396 Project version: 24.03.0
00:01:06.396 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.396 C linker for the host machine: cc ld.bfd 2.39-16
00:01:06.396 Host machine cpu family: x86_64
00:01:06.396 Host machine cpu: x86_64
00:01:06.396 Message: ## Building in Developer Mode ##
00:01:06.396 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:06.396 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:06.396 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:06.396 Program python3 found: YES (/usr/bin/python3)
00:01:06.396 Program cat found: YES (/usr/bin/cat)
00:01:06.396 Compiler for C supports arguments -march=native: YES
00:01:06.396 Checking for size of "void *" : 8
00:01:06.396 Checking for size of "void *" : 8 (cached)
00:01:06.396 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:06.396 Library m found: YES
00:01:06.396 Library numa found: YES
00:01:06.396 Has header "numaif.h" : YES
00:01:06.396 Library fdt found: NO
00:01:06.396 Library execinfo found: NO
00:01:06.396 Has header "execinfo.h" : YES
00:01:06.396 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:06.396 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:06.396 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:06.396 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:06.396 Run-time dependency openssl found: YES 3.0.9
00:01:06.396 Run-time dependency libpcap found: YES 1.10.4
00:01:06.397 Has header "pcap.h" with dependency libpcap: YES
00:01:06.397 Compiler for C supports arguments -Wcast-qual: YES
00:01:06.397 Compiler for C supports arguments -Wdeprecated: YES
00:01:06.397 Compiler for C supports arguments -Wformat: YES
00:01:06.397 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:06.397 Compiler for C supports arguments -Wformat-security: NO
00:01:06.397 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:06.397 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:06.397 Compiler for C supports arguments -Wnested-externs: YES
00:01:06.397 Compiler for C supports arguments -Wold-style-definition: YES
00:01:06.397 Compiler for C supports arguments -Wpointer-arith: YES
00:01:06.397 Compiler for C supports arguments -Wsign-compare: YES
00:01:06.397 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:06.397 Compiler for C supports arguments -Wundef: YES
00:01:06.397 Compiler for C supports arguments -Wwrite-strings: YES
00:01:06.397 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:06.397 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:06.397 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:06.397 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:06.397 Program objdump found: YES (/usr/bin/objdump)
00:01:06.397 Compiler for C supports arguments -mavx512f: YES
00:01:06.397 Checking if "AVX512 checking" compiles: YES
00:01:06.397 Fetching value of define "__SSE4_2__" : 1
00:01:06.397 Fetching value of define "__AES__" : 1
00:01:06.397 Fetching value of define "__AVX__" : 1
00:01:06.397 Fetching value of define "__AVX2__" : (undefined)
00:01:06.397 Fetching value of define "__AVX512BW__" : (undefined)
00:01:06.397 Fetching value of define "__AVX512CD__" : (undefined)
00:01:06.397 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:06.397 Fetching value of define "__AVX512F__" : (undefined)
00:01:06.397 Fetching value of define "__AVX512VL__" : (undefined)
00:01:06.397 Fetching value of define "__PCLMUL__" : 1
00:01:06.397 Fetching value of define "__RDRND__" : 1
00:01:06.397 Fetching value of define "__RDSEED__" : (undefined)
00:01:06.397 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:06.397 Fetching value of define "__znver1__" : (undefined)
00:01:06.397 Fetching value of define "__znver2__" : (undefined)
00:01:06.397 Fetching value of define "__znver3__" : (undefined)
00:01:06.397 Fetching value of define "__znver4__" : (undefined)
00:01:06.397 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:06.397 Message: lib/log: Defining dependency "log"
00:01:06.397 Message: lib/kvargs: Defining dependency "kvargs"
00:01:06.397 Message: lib/telemetry: Defining dependency "telemetry"
00:01:06.397 Checking for function "getentropy" : NO
00:01:06.397 Message: lib/eal: Defining dependency "eal"
00:01:06.397 Message: lib/ring: Defining dependency "ring"
00:01:06.397 Message: lib/rcu: Defining dependency "rcu"
00:01:06.397 Message: lib/mempool: Defining dependency "mempool"
00:01:06.397 Message: lib/mbuf: Defining dependency "mbuf"
00:01:06.397 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:06.397 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:06.397 Compiler for C supports arguments -mpclmul: YES
00:01:06.397 Compiler for C supports arguments -maes: YES
00:01:06.397 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:06.397 Compiler for C supports arguments -mavx512bw: YES
00:01:06.397 Compiler for C supports arguments -mavx512dq: YES
00:01:06.397 Compiler for C supports arguments -mavx512vl: YES
00:01:06.397 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:06.397 Compiler for C supports arguments -mavx2: YES
00:01:06.397 Compiler for C supports arguments -mavx: YES
00:01:06.397 Message: lib/net: Defining dependency "net"
00:01:06.397 Message: lib/meter: Defining dependency "meter"
00:01:06.397 Message: lib/ethdev: Defining dependency "ethdev"
00:01:06.397 Message: lib/pci: Defining dependency "pci"
00:01:06.397 Message: lib/cmdline: Defining dependency "cmdline"
00:01:06.397 Message: lib/hash: Defining dependency "hash"
00:01:06.397 Message: lib/timer: Defining dependency "timer"
00:01:06.397 Message: lib/compressdev: Defining dependency "compressdev"
00:01:06.397 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:06.397 Message: lib/dmadev: Defining dependency "dmadev"
00:01:06.397 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:06.397 Message: lib/power: Defining dependency "power"
00:01:06.397 Message: lib/reorder: Defining dependency "reorder"
00:01:06.397 Message: lib/security: Defining dependency "security"
00:01:06.397 Has header "linux/userfaultfd.h" : YES
00:01:06.397 Has header "linux/vduse.h" : YES
00:01:06.397 Message: lib/vhost: Defining dependency "vhost"
00:01:06.397 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:06.397 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:06.397 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:06.397 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:06.397 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:06.397 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:06.397 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:06.397 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:06.397 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:06.397 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:06.397 Program doxygen found: YES (/usr/bin/doxygen)
00:01:06.397 Configuring doxy-api-html.conf using configuration
00:01:06.397 Configuring doxy-api-man.conf using configuration
00:01:06.397 
Program mandb found: YES (/usr/bin/mandb) 00:01:06.397 Program sphinx-build found: NO 00:01:06.397 Configuring rte_build_config.h using configuration 00:01:06.397 Message: 00:01:06.397 ================= 00:01:06.397 Applications Enabled 00:01:06.397 ================= 00:01:06.397 00:01:06.397 apps: 00:01:06.397 00:01:06.397 00:01:06.397 Message: 00:01:06.397 ================= 00:01:06.397 Libraries Enabled 00:01:06.397 ================= 00:01:06.397 00:01:06.397 libs: 00:01:06.397 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:06.397 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:06.397 cryptodev, dmadev, power, reorder, security, vhost, 00:01:06.397 00:01:06.397 Message: 00:01:06.397 =============== 00:01:06.397 Drivers Enabled 00:01:06.397 =============== 00:01:06.397 00:01:06.397 common: 00:01:06.397 00:01:06.397 bus: 00:01:06.397 pci, vdev, 00:01:06.397 mempool: 00:01:06.397 ring, 00:01:06.397 dma: 00:01:06.397 00:01:06.397 net: 00:01:06.397 00:01:06.397 crypto: 00:01:06.397 00:01:06.397 compress: 00:01:06.397 00:01:06.397 vdpa: 00:01:06.397 00:01:06.397 00:01:06.397 Message: 00:01:06.397 ================= 00:01:06.397 Content Skipped 00:01:06.397 ================= 00:01:06.397 00:01:06.397 apps: 00:01:06.397 dumpcap: explicitly disabled via build config 00:01:06.397 graph: explicitly disabled via build config 00:01:06.397 pdump: explicitly disabled via build config 00:01:06.397 proc-info: explicitly disabled via build config 00:01:06.397 test-acl: explicitly disabled via build config 00:01:06.397 test-bbdev: explicitly disabled via build config 00:01:06.397 test-cmdline: explicitly disabled via build config 00:01:06.397 test-compress-perf: explicitly disabled via build config 00:01:06.397 test-crypto-perf: explicitly disabled via build config 00:01:06.397 test-dma-perf: explicitly disabled via build config 00:01:06.397 test-eventdev: explicitly disabled via build config 00:01:06.397 test-fib: explicitly disabled via build 
config 00:01:06.397 test-flow-perf: explicitly disabled via build config 00:01:06.397 test-gpudev: explicitly disabled via build config 00:01:06.397 test-mldev: explicitly disabled via build config 00:01:06.397 test-pipeline: explicitly disabled via build config 00:01:06.397 test-pmd: explicitly disabled via build config 00:01:06.397 test-regex: explicitly disabled via build config 00:01:06.397 test-sad: explicitly disabled via build config 00:01:06.397 test-security-perf: explicitly disabled via build config 00:01:06.397 00:01:06.397 libs: 00:01:06.397 argparse: explicitly disabled via build config 00:01:06.397 metrics: explicitly disabled via build config 00:01:06.397 acl: explicitly disabled via build config 00:01:06.397 bbdev: explicitly disabled via build config 00:01:06.397 bitratestats: explicitly disabled via build config 00:01:06.397 bpf: explicitly disabled via build config 00:01:06.397 cfgfile: explicitly disabled via build config 00:01:06.397 distributor: explicitly disabled via build config 00:01:06.397 efd: explicitly disabled via build config 00:01:06.397 eventdev: explicitly disabled via build config 00:01:06.397 dispatcher: explicitly disabled via build config 00:01:06.397 gpudev: explicitly disabled via build config 00:01:06.397 gro: explicitly disabled via build config 00:01:06.397 gso: explicitly disabled via build config 00:01:06.397 ip_frag: explicitly disabled via build config 00:01:06.397 jobstats: explicitly disabled via build config 00:01:06.397 latencystats: explicitly disabled via build config 00:01:06.397 lpm: explicitly disabled via build config 00:01:06.397 member: explicitly disabled via build config 00:01:06.397 pcapng: explicitly disabled via build config 00:01:06.397 rawdev: explicitly disabled via build config 00:01:06.397 regexdev: explicitly disabled via build config 00:01:06.397 mldev: explicitly disabled via build config 00:01:06.397 rib: explicitly disabled via build config 00:01:06.397 sched: explicitly disabled via build 
config 00:01:06.397 stack: explicitly disabled via build config 00:01:06.397 ipsec: explicitly disabled via build config 00:01:06.397 pdcp: explicitly disabled via build config 00:01:06.397 fib: explicitly disabled via build config 00:01:06.397 port: explicitly disabled via build config 00:01:06.397 pdump: explicitly disabled via build config 00:01:06.397 table: explicitly disabled via build config 00:01:06.397 pipeline: explicitly disabled via build config 00:01:06.397 graph: explicitly disabled via build config 00:01:06.397 node: explicitly disabled via build config 00:01:06.397 00:01:06.397 drivers: 00:01:06.397 common/cpt: not in enabled drivers build config 00:01:06.397 common/dpaax: not in enabled drivers build config 00:01:06.397 common/iavf: not in enabled drivers build config 00:01:06.397 common/idpf: not in enabled drivers build config 00:01:06.397 common/ionic: not in enabled drivers build config 00:01:06.398 common/mvep: not in enabled drivers build config 00:01:06.398 common/octeontx: not in enabled drivers build config 00:01:06.398 bus/auxiliary: not in enabled drivers build config 00:01:06.398 bus/cdx: not in enabled drivers build config 00:01:06.398 bus/dpaa: not in enabled drivers build config 00:01:06.398 bus/fslmc: not in enabled drivers build config 00:01:06.398 bus/ifpga: not in enabled drivers build config 00:01:06.398 bus/platform: not in enabled drivers build config 00:01:06.398 bus/uacce: not in enabled drivers build config 00:01:06.398 bus/vmbus: not in enabled drivers build config 00:01:06.398 common/cnxk: not in enabled drivers build config 00:01:06.398 common/mlx5: not in enabled drivers build config 00:01:06.398 common/nfp: not in enabled drivers build config 00:01:06.398 common/nitrox: not in enabled drivers build config 00:01:06.398 common/qat: not in enabled drivers build config 00:01:06.398 common/sfc_efx: not in enabled drivers build config 00:01:06.398 mempool/bucket: not in enabled drivers build config 00:01:06.398 mempool/cnxk: 
not in enabled drivers build config 00:01:06.398 mempool/dpaa: not in enabled drivers build config 00:01:06.398 mempool/dpaa2: not in enabled drivers build config 00:01:06.398 mempool/octeontx: not in enabled drivers build config 00:01:06.398 mempool/stack: not in enabled drivers build config 00:01:06.398 dma/cnxk: not in enabled drivers build config 00:01:06.398 dma/dpaa: not in enabled drivers build config 00:01:06.398 dma/dpaa2: not in enabled drivers build config 00:01:06.398 dma/hisilicon: not in enabled drivers build config 00:01:06.398 dma/idxd: not in enabled drivers build config 00:01:06.398 dma/ioat: not in enabled drivers build config 00:01:06.398 dma/skeleton: not in enabled drivers build config 00:01:06.398 net/af_packet: not in enabled drivers build config 00:01:06.398 net/af_xdp: not in enabled drivers build config 00:01:06.398 net/ark: not in enabled drivers build config 00:01:06.398 net/atlantic: not in enabled drivers build config 00:01:06.398 net/avp: not in enabled drivers build config 00:01:06.398 net/axgbe: not in enabled drivers build config 00:01:06.398 net/bnx2x: not in enabled drivers build config 00:01:06.398 net/bnxt: not in enabled drivers build config 00:01:06.398 net/bonding: not in enabled drivers build config 00:01:06.398 net/cnxk: not in enabled drivers build config 00:01:06.398 net/cpfl: not in enabled drivers build config 00:01:06.398 net/cxgbe: not in enabled drivers build config 00:01:06.398 net/dpaa: not in enabled drivers build config 00:01:06.398 net/dpaa2: not in enabled drivers build config 00:01:06.398 net/e1000: not in enabled drivers build config 00:01:06.398 net/ena: not in enabled drivers build config 00:01:06.398 net/enetc: not in enabled drivers build config 00:01:06.398 net/enetfec: not in enabled drivers build config 00:01:06.398 net/enic: not in enabled drivers build config 00:01:06.398 net/failsafe: not in enabled drivers build config 00:01:06.398 net/fm10k: not in enabled drivers build config 00:01:06.398 
net/gve: not in enabled drivers build config 00:01:06.398 net/hinic: not in enabled drivers build config 00:01:06.398 net/hns3: not in enabled drivers build config 00:01:06.398 net/i40e: not in enabled drivers build config 00:01:06.398 net/iavf: not in enabled drivers build config 00:01:06.398 net/ice: not in enabled drivers build config 00:01:06.398 net/idpf: not in enabled drivers build config 00:01:06.398 net/igc: not in enabled drivers build config 00:01:06.398 net/ionic: not in enabled drivers build config 00:01:06.398 net/ipn3ke: not in enabled drivers build config 00:01:06.398 net/ixgbe: not in enabled drivers build config 00:01:06.398 net/mana: not in enabled drivers build config 00:01:06.398 net/memif: not in enabled drivers build config 00:01:06.398 net/mlx4: not in enabled drivers build config 00:01:06.398 net/mlx5: not in enabled drivers build config 00:01:06.398 net/mvneta: not in enabled drivers build config 00:01:06.398 net/mvpp2: not in enabled drivers build config 00:01:06.398 net/netvsc: not in enabled drivers build config 00:01:06.398 net/nfb: not in enabled drivers build config 00:01:06.398 net/nfp: not in enabled drivers build config 00:01:06.398 net/ngbe: not in enabled drivers build config 00:01:06.398 net/null: not in enabled drivers build config 00:01:06.398 net/octeontx: not in enabled drivers build config 00:01:06.398 net/octeon_ep: not in enabled drivers build config 00:01:06.398 net/pcap: not in enabled drivers build config 00:01:06.398 net/pfe: not in enabled drivers build config 00:01:06.398 net/qede: not in enabled drivers build config 00:01:06.398 net/ring: not in enabled drivers build config 00:01:06.398 net/sfc: not in enabled drivers build config 00:01:06.398 net/softnic: not in enabled drivers build config 00:01:06.398 net/tap: not in enabled drivers build config 00:01:06.398 net/thunderx: not in enabled drivers build config 00:01:06.398 net/txgbe: not in enabled drivers build config 00:01:06.398 net/vdev_netvsc: not in enabled 
drivers build config 00:01:06.398 net/vhost: not in enabled drivers build config 00:01:06.398 net/virtio: not in enabled drivers build config 00:01:06.398 net/vmxnet3: not in enabled drivers build config 00:01:06.398 raw/*: missing internal dependency, "rawdev" 00:01:06.398 crypto/armv8: not in enabled drivers build config 00:01:06.398 crypto/bcmfs: not in enabled drivers build config 00:01:06.398 crypto/caam_jr: not in enabled drivers build config 00:01:06.398 crypto/ccp: not in enabled drivers build config 00:01:06.398 crypto/cnxk: not in enabled drivers build config 00:01:06.398 crypto/dpaa_sec: not in enabled drivers build config 00:01:06.398 crypto/dpaa2_sec: not in enabled drivers build config 00:01:06.398 crypto/ipsec_mb: not in enabled drivers build config 00:01:06.398 crypto/mlx5: not in enabled drivers build config 00:01:06.398 crypto/mvsam: not in enabled drivers build config 00:01:06.398 crypto/nitrox: not in enabled drivers build config 00:01:06.398 crypto/null: not in enabled drivers build config 00:01:06.398 crypto/octeontx: not in enabled drivers build config 00:01:06.398 crypto/openssl: not in enabled drivers build config 00:01:06.398 crypto/scheduler: not in enabled drivers build config 00:01:06.398 crypto/uadk: not in enabled drivers build config 00:01:06.398 crypto/virtio: not in enabled drivers build config 00:01:06.398 compress/isal: not in enabled drivers build config 00:01:06.398 compress/mlx5: not in enabled drivers build config 00:01:06.398 compress/nitrox: not in enabled drivers build config 00:01:06.398 compress/octeontx: not in enabled drivers build config 00:01:06.398 compress/zlib: not in enabled drivers build config 00:01:06.398 regex/*: missing internal dependency, "regexdev" 00:01:06.398 ml/*: missing internal dependency, "mldev" 00:01:06.398 vdpa/ifc: not in enabled drivers build config 00:01:06.398 vdpa/mlx5: not in enabled drivers build config 00:01:06.398 vdpa/nfp: not in enabled drivers build config 00:01:06.398 vdpa/sfc: not 
in enabled drivers build config 00:01:06.398 event/*: missing internal dependency, "eventdev" 00:01:06.398 baseband/*: missing internal dependency, "bbdev" 00:01:06.398 gpu/*: missing internal dependency, "gpudev" 00:01:06.398 00:01:06.398 00:01:06.398 Build targets in project: 85 00:01:06.398 00:01:06.398 DPDK 24.03.0 00:01:06.398 00:01:06.398 User defined options 00:01:06.398 buildtype : debug 00:01:06.398 default_library : shared 00:01:06.398 libdir : lib 00:01:06.398 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.398 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:06.398 c_link_args : 00:01:06.398 cpu_instruction_set: native 00:01:06.398 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:06.398 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:06.398 enable_docs : false 00:01:06.398 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:06.398 enable_kmods : false 00:01:06.398 max_lcores : 128 00:01:06.398 tests : false 00:01:06.398 00:01:06.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.398 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:06.398 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:06.398 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:06.398 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:06.398 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 
00:01:06.398 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:06.398 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:06.398 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:06.398 [8/268] Linking static target lib/librte_kvargs.a 00:01:06.398 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:06.398 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:06.660 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:06.660 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:06.660 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:06.660 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:06.660 [15/268] Linking static target lib/librte_log.a 00:01:06.660 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:07.231 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.231 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:07.231 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:07.231 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:07.231 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:07.231 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:07.231 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:07.231 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:07.494 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:07.494 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:07.494 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:07.494 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:07.494 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:07.494 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:07.494 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:07.494 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:07.494 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:07.494 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:07.494 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:07.494 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:07.494 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:07.494 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:07.494 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:07.494 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:07.494 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:07.494 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:07.494 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:07.494 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:07.494 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:07.495 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:07.495 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:07.495 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:07.495 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:07.495 [50/268] Linking static target lib/librte_telemetry.a 00:01:07.495 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:07.495 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:07.495 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:07.495 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:07.495 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:07.495 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:07.495 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:07.495 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:07.495 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:07.755 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:07.755 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:07.755 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:07.755 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:07.755 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:07.755 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:07.755 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.015 [67/268] Linking target lib/librte_log.so.24.1 00:01:08.015 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:08.015 [69/268] Linking static target lib/librte_pci.a 00:01:08.015 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:08.015 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:08.015 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
00:01:08.279 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:08.279 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:08.279 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:08.279 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:08.279 [77/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:08.279 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:08.279 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:08.279 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:08.279 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:08.279 [82/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:08.279 [83/268] Linking target lib/librte_kvargs.so.24.1 00:01:08.279 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:08.279 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:08.279 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:08.279 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:08.279 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:08.279 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:08.279 [90/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:08.279 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:08.279 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:08.279 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:08.537 [94/268] Linking static target lib/librte_meter.a 00:01:08.537 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:08.537 
[96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:08.537 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:08.537 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:08.537 [99/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.537 [100/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:08.537 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:08.537 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.537 [103/268] Linking static target lib/librte_ring.a 00:01:08.537 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:08.537 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:08.537 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:08.537 [107/268] Linking target lib/librte_telemetry.so.24.1 00:01:08.537 [108/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:08.537 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:08.537 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:08.537 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:08.537 [112/268] Linking static target lib/librte_rcu.a 00:01:08.537 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:08.537 [114/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:08.537 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:08.798 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:08.798 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:08.798 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:01:08.798 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:08.798 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:08.798 [121/268] Linking static target lib/librte_mempool.a 00:01:08.798 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:08.798 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:08.798 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:08.798 [125/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:08.798 [126/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:08.798 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:08.798 [128/268] Linking static target lib/librte_eal.a 00:01:08.798 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:08.798 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:08.798 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:08.798 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:09.059 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:09.059 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:09.059 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.059 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:09.059 [137/268] Linking static target lib/librte_net.a 00:01:09.059 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:09.059 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:09.059 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.059 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:01:09.059 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:09.321 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:09.321 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:09.321 [145/268] Linking static target lib/librte_cmdline.a 00:01:09.321 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.321 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:09.321 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:09.321 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:09.321 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:09.321 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:09.321 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:09.321 [153/268] Linking static target lib/librte_timer.a 00:01:09.321 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:09.578 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:09.578 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.578 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:09.578 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:09.578 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:09.578 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:09.578 [161/268] Linking static target lib/librte_dmadev.a 00:01:09.578 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:09.836 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:09.836 
[164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:09.836 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:09.836 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:09.836 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:09.836 [168/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:09.836 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.836 [170/268] Linking static target lib/librte_compressdev.a 00:01:09.836 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:09.836 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.836 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:09.836 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:09.836 [175/268] Linking static target lib/librte_power.a 00:01:09.836 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:09.836 [177/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:09.836 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:10.093 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:10.093 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:10.093 [181/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:10.093 [182/268] Linking static target lib/librte_hash.a 00:01:10.093 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:10.093 [184/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:10.093 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:10.093 [186/268] Linking static target lib/librte_mbuf.a 00:01:10.093 
[187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:10.093 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:10.093 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:10.093 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:10.093 [191/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.093 [192/268] Linking static target lib/librte_reorder.a 00:01:10.093 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:10.093 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.093 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:10.093 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:10.093 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.093 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.093 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:10.351 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:10.351 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:10.351 [202/268] Linking static target lib/librte_security.a 00:01:10.351 [203/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.351 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:10.351 [205/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:10.351 [206/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:10.351 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:10.351 [208/268] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.351 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:10.351 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.351 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.351 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:10.351 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:10.351 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.608 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.608 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:10.608 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.608 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:10.608 [219/268] Linking static target drivers/librte_mempool_ring.a 00:01:10.608 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.608 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.608 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:10.608 [223/268] Linking static target lib/librte_cryptodev.a 00:01:10.866 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.866 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:10.866 [226/268] Linking static target lib/librte_ethdev.a 00:01:11.801 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.175 [228/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.082 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.082 [230/268] Linking target lib/librte_eal.so.24.1 00:01:15.082 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:15.082 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.082 [233/268] Linking target lib/librte_ring.so.24.1 00:01:15.082 [234/268] Linking target lib/librte_meter.so.24.1 00:01:15.082 [235/268] Linking target lib/librte_timer.so.24.1 00:01:15.082 [236/268] Linking target lib/librte_pci.so.24.1 00:01:15.082 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:15.082 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:15.340 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:15.340 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:15.340 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:15.340 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:15.340 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:15.340 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:15.340 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:15.340 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:15.340 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:15.340 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:15.340 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:15.340 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:15.598 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:15.598 [252/268] Linking 
target lib/librte_reorder.so.24.1 00:01:15.598 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:15.598 [254/268] Linking target lib/librte_net.so.24.1 00:01:15.598 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:15.598 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:15.598 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:15.857 [258/268] Linking target lib/librte_hash.so.24.1 00:01:15.857 [259/268] Linking target lib/librte_security.so.24.1 00:01:15.857 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:15.857 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:15.857 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:15.857 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:15.857 [264/268] Linking target lib/librte_power.so.24.1 00:01:18.386 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:18.386 [266/268] Linking static target lib/librte_vhost.a 00:01:19.760 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.760 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:19.760 INFO: autodetecting backend as ninja 00:01:19.760 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:20.692 CC lib/log/log.o 00:01:20.692 CC lib/log/log_flags.o 00:01:20.692 CC lib/log/log_deprecated.o 00:01:20.692 CC lib/ut_mock/mock.o 00:01:20.692 CC lib/ut/ut.o 00:01:20.692 LIB libspdk_ut_mock.a 00:01:20.692 LIB libspdk_log.a 00:01:20.692 LIB libspdk_ut.a 00:01:20.692 SO libspdk_ut_mock.so.6.0 00:01:20.692 SO libspdk_ut.so.2.0 00:01:20.692 SO libspdk_log.so.7.0 00:01:20.692 SYMLINK libspdk_ut_mock.so 00:01:20.692 SYMLINK libspdk_ut.so 00:01:20.692 SYMLINK libspdk_log.so 00:01:20.950 CXX 
lib/trace_parser/trace.o 00:01:20.950 CC lib/dma/dma.o 00:01:20.950 CC lib/ioat/ioat.o 00:01:20.950 CC lib/util/base64.o 00:01:20.950 CC lib/util/bit_array.o 00:01:20.950 CC lib/util/cpuset.o 00:01:20.950 CC lib/util/crc16.o 00:01:20.950 CC lib/util/crc32.o 00:01:20.950 CC lib/util/crc32c.o 00:01:20.950 CC lib/util/crc32_ieee.o 00:01:20.950 CC lib/util/crc64.o 00:01:20.950 CC lib/util/dif.o 00:01:20.950 CC lib/util/fd.o 00:01:20.950 CC lib/util/file.o 00:01:20.950 CC lib/util/hexlify.o 00:01:20.950 CC lib/util/iov.o 00:01:20.950 CC lib/util/math.o 00:01:20.950 CC lib/util/pipe.o 00:01:20.950 CC lib/util/strerror_tls.o 00:01:20.950 CC lib/util/string.o 00:01:20.950 CC lib/util/uuid.o 00:01:20.950 CC lib/util/xor.o 00:01:20.950 CC lib/util/fd_group.o 00:01:20.950 CC lib/util/zipf.o 00:01:20.950 CC lib/vfio_user/host/vfio_user_pci.o 00:01:20.950 CC lib/vfio_user/host/vfio_user.o 00:01:21.207 LIB libspdk_dma.a 00:01:21.207 SO libspdk_dma.so.4.0 00:01:21.207 LIB libspdk_ioat.a 00:01:21.207 SYMLINK libspdk_dma.so 00:01:21.207 SO libspdk_ioat.so.7.0 00:01:21.207 LIB libspdk_vfio_user.a 00:01:21.207 SYMLINK libspdk_ioat.so 00:01:21.207 SO libspdk_vfio_user.so.5.0 00:01:21.465 SYMLINK libspdk_vfio_user.so 00:01:21.465 LIB libspdk_util.a 00:01:21.465 SO libspdk_util.so.9.1 00:01:21.724 SYMLINK libspdk_util.so 00:01:21.724 CC lib/conf/conf.o 00:01:21.724 CC lib/json/json_parse.o 00:01:21.724 CC lib/env_dpdk/env.o 00:01:21.724 CC lib/json/json_util.o 00:01:21.724 CC lib/env_dpdk/memory.o 00:01:21.724 CC lib/env_dpdk/pci.o 00:01:21.724 CC lib/json/json_write.o 00:01:21.724 CC lib/vmd/vmd.o 00:01:21.724 CC lib/env_dpdk/init.o 00:01:21.724 CC lib/idxd/idxd.o 00:01:21.724 CC lib/rdma_utils/rdma_utils.o 00:01:21.724 CC lib/env_dpdk/threads.o 00:01:21.724 CC lib/rdma_provider/common.o 00:01:21.724 CC lib/idxd/idxd_user.o 00:01:21.724 CC lib/env_dpdk/pci_ioat.o 00:01:21.724 CC lib/vmd/led.o 00:01:21.724 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:21.724 CC lib/idxd/idxd_kernel.o 
00:01:21.724 CC lib/env_dpdk/pci_virtio.o 00:01:21.724 CC lib/env_dpdk/pci_vmd.o 00:01:21.724 CC lib/env_dpdk/pci_idxd.o 00:01:21.724 CC lib/env_dpdk/pci_event.o 00:01:21.724 CC lib/env_dpdk/sigbus_handler.o 00:01:21.724 CC lib/env_dpdk/pci_dpdk.o 00:01:21.724 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:21.724 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:21.982 LIB libspdk_trace_parser.a 00:01:21.982 SO libspdk_trace_parser.so.6.0 00:01:21.982 SYMLINK libspdk_trace_parser.so 00:01:21.982 LIB libspdk_rdma_provider.a 00:01:21.982 SO libspdk_rdma_provider.so.6.0 00:01:22.240 LIB libspdk_rdma_utils.a 00:01:22.240 SYMLINK libspdk_rdma_provider.so 00:01:22.240 LIB libspdk_json.a 00:01:22.240 SO libspdk_rdma_utils.so.1.0 00:01:22.240 LIB libspdk_conf.a 00:01:22.240 SO libspdk_json.so.6.0 00:01:22.240 SO libspdk_conf.so.6.0 00:01:22.240 SYMLINK libspdk_rdma_utils.so 00:01:22.240 SYMLINK libspdk_json.so 00:01:22.240 SYMLINK libspdk_conf.so 00:01:22.498 CC lib/jsonrpc/jsonrpc_server.o 00:01:22.498 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:22.498 CC lib/jsonrpc/jsonrpc_client.o 00:01:22.498 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:22.498 LIB libspdk_idxd.a 00:01:22.498 SO libspdk_idxd.so.12.0 00:01:22.498 SYMLINK libspdk_idxd.so 00:01:22.498 LIB libspdk_vmd.a 00:01:22.756 SO libspdk_vmd.so.6.0 00:01:22.756 LIB libspdk_jsonrpc.a 00:01:22.756 SYMLINK libspdk_vmd.so 00:01:22.756 SO libspdk_jsonrpc.so.6.0 00:01:22.756 SYMLINK libspdk_jsonrpc.so 00:01:23.014 CC lib/rpc/rpc.o 00:01:23.014 LIB libspdk_rpc.a 00:01:23.272 SO libspdk_rpc.so.6.0 00:01:23.272 SYMLINK libspdk_rpc.so 00:01:23.272 CC lib/notify/notify.o 00:01:23.272 CC lib/notify/notify_rpc.o 00:01:23.272 CC lib/keyring/keyring.o 00:01:23.272 CC lib/trace/trace.o 00:01:23.272 CC lib/keyring/keyring_rpc.o 00:01:23.272 CC lib/trace/trace_flags.o 00:01:23.272 CC lib/trace/trace_rpc.o 00:01:23.530 LIB libspdk_notify.a 00:01:23.530 SO libspdk_notify.so.6.0 00:01:23.530 SYMLINK libspdk_notify.so 00:01:23.530 LIB libspdk_keyring.a 
00:01:23.530 LIB libspdk_trace.a 00:01:23.530 SO libspdk_keyring.so.1.0 00:01:23.788 SO libspdk_trace.so.11.0 00:01:23.788 SYMLINK libspdk_keyring.so 00:01:23.788 SYMLINK libspdk_trace.so 00:01:23.788 LIB libspdk_env_dpdk.a 00:01:23.788 CC lib/thread/thread.o 00:01:23.788 CC lib/thread/iobuf.o 00:01:23.788 CC lib/sock/sock.o 00:01:23.788 CC lib/sock/sock_rpc.o 00:01:24.045 SO libspdk_env_dpdk.so.14.1 00:01:24.045 SYMLINK libspdk_env_dpdk.so 00:01:24.303 LIB libspdk_sock.a 00:01:24.303 SO libspdk_sock.so.10.0 00:01:24.303 SYMLINK libspdk_sock.so 00:01:24.561 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:24.561 CC lib/nvme/nvme_ctrlr.o 00:01:24.561 CC lib/nvme/nvme_fabric.o 00:01:24.561 CC lib/nvme/nvme_ns_cmd.o 00:01:24.561 CC lib/nvme/nvme_ns.o 00:01:24.561 CC lib/nvme/nvme_pcie_common.o 00:01:24.561 CC lib/nvme/nvme_pcie.o 00:01:24.561 CC lib/nvme/nvme_qpair.o 00:01:24.561 CC lib/nvme/nvme.o 00:01:24.561 CC lib/nvme/nvme_quirks.o 00:01:24.561 CC lib/nvme/nvme_transport.o 00:01:24.561 CC lib/nvme/nvme_discovery.o 00:01:24.561 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:24.561 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:24.561 CC lib/nvme/nvme_tcp.o 00:01:24.561 CC lib/nvme/nvme_opal.o 00:01:24.561 CC lib/nvme/nvme_io_msg.o 00:01:24.561 CC lib/nvme/nvme_poll_group.o 00:01:24.561 CC lib/nvme/nvme_zns.o 00:01:24.561 CC lib/nvme/nvme_stubs.o 00:01:24.561 CC lib/nvme/nvme_auth.o 00:01:24.561 CC lib/nvme/nvme_cuse.o 00:01:24.561 CC lib/nvme/nvme_vfio_user.o 00:01:24.561 CC lib/nvme/nvme_rdma.o 00:01:25.491 LIB libspdk_thread.a 00:01:25.491 SO libspdk_thread.so.10.1 00:01:25.491 SYMLINK libspdk_thread.so 00:01:25.748 CC lib/accel/accel.o 00:01:25.748 CC lib/virtio/virtio.o 00:01:25.748 CC lib/vfu_tgt/tgt_endpoint.o 00:01:25.748 CC lib/blob/blobstore.o 00:01:25.748 CC lib/init/json_config.o 00:01:25.748 CC lib/accel/accel_rpc.o 00:01:25.749 CC lib/virtio/virtio_vhost_user.o 00:01:25.749 CC lib/blob/request.o 00:01:25.749 CC lib/vfu_tgt/tgt_rpc.o 00:01:25.749 CC lib/init/subsystem.o 
00:01:25.749 CC lib/blob/zeroes.o 00:01:25.749 CC lib/virtio/virtio_vfio_user.o 00:01:25.749 CC lib/accel/accel_sw.o 00:01:25.749 CC lib/init/subsystem_rpc.o 00:01:25.749 CC lib/blob/blob_bs_dev.o 00:01:25.749 CC lib/virtio/virtio_pci.o 00:01:25.749 CC lib/init/rpc.o 00:01:26.006 LIB libspdk_init.a 00:01:26.006 SO libspdk_init.so.5.0 00:01:26.006 LIB libspdk_vfu_tgt.a 00:01:26.006 LIB libspdk_virtio.a 00:01:26.006 SYMLINK libspdk_init.so 00:01:26.006 SO libspdk_vfu_tgt.so.3.0 00:01:26.006 SO libspdk_virtio.so.7.0 00:01:26.264 SYMLINK libspdk_vfu_tgt.so 00:01:26.264 SYMLINK libspdk_virtio.so 00:01:26.264 CC lib/event/app.o 00:01:26.264 CC lib/event/reactor.o 00:01:26.264 CC lib/event/log_rpc.o 00:01:26.264 CC lib/event/app_rpc.o 00:01:26.264 CC lib/event/scheduler_static.o 00:01:26.521 LIB libspdk_event.a 00:01:26.778 SO libspdk_event.so.14.0 00:01:26.778 LIB libspdk_accel.a 00:01:26.778 SYMLINK libspdk_event.so 00:01:26.778 SO libspdk_accel.so.15.1 00:01:26.778 SYMLINK libspdk_accel.so 00:01:26.778 LIB libspdk_nvme.a 00:01:27.035 CC lib/bdev/bdev.o 00:01:27.035 CC lib/bdev/bdev_rpc.o 00:01:27.035 CC lib/bdev/bdev_zone.o 00:01:27.035 CC lib/bdev/part.o 00:01:27.035 CC lib/bdev/scsi_nvme.o 00:01:27.035 SO libspdk_nvme.so.13.1 00:01:27.293 SYMLINK libspdk_nvme.so 00:01:28.665 LIB libspdk_blob.a 00:01:28.665 SO libspdk_blob.so.11.0 00:01:28.665 SYMLINK libspdk_blob.so 00:01:28.923 CC lib/lvol/lvol.o 00:01:28.923 CC lib/blobfs/blobfs.o 00:01:28.923 CC lib/blobfs/tree.o 00:01:29.488 LIB libspdk_bdev.a 00:01:29.488 SO libspdk_bdev.so.15.1 00:01:29.754 SYMLINK libspdk_bdev.so 00:01:29.754 LIB libspdk_blobfs.a 00:01:29.754 SO libspdk_blobfs.so.10.0 00:01:29.754 CC lib/nbd/nbd.o 00:01:29.754 CC lib/ublk/ublk.o 00:01:29.754 CC lib/nvmf/ctrlr.o 00:01:29.754 CC lib/nbd/nbd_rpc.o 00:01:29.754 CC lib/scsi/dev.o 00:01:29.754 CC lib/ublk/ublk_rpc.o 00:01:29.754 CC lib/nvmf/ctrlr_discovery.o 00:01:29.754 CC lib/scsi/lun.o 00:01:29.754 CC lib/ftl/ftl_core.o 00:01:29.754 CC 
lib/scsi/port.o 00:01:29.754 CC lib/nvmf/ctrlr_bdev.o 00:01:29.754 CC lib/ftl/ftl_init.o 00:01:29.754 CC lib/nvmf/subsystem.o 00:01:29.754 CC lib/scsi/scsi.o 00:01:29.755 CC lib/ftl/ftl_layout.o 00:01:29.755 CC lib/scsi/scsi_bdev.o 00:01:29.755 CC lib/nvmf/nvmf.o 00:01:29.755 CC lib/scsi/scsi_pr.o 00:01:29.755 CC lib/ftl/ftl_debug.o 00:01:29.755 CC lib/ftl/ftl_io.o 00:01:29.755 CC lib/nvmf/nvmf_rpc.o 00:01:29.755 CC lib/scsi/scsi_rpc.o 00:01:29.755 CC lib/nvmf/transport.o 00:01:29.755 CC lib/scsi/task.o 00:01:29.755 CC lib/ftl/ftl_sb.o 00:01:29.755 CC lib/nvmf/tcp.o 00:01:29.755 CC lib/ftl/ftl_l2p.o 00:01:29.755 CC lib/nvmf/stubs.o 00:01:29.755 CC lib/ftl/ftl_l2p_flat.o 00:01:29.755 CC lib/nvmf/mdns_server.o 00:01:29.755 CC lib/ftl/ftl_nv_cache.o 00:01:29.755 CC lib/nvmf/vfio_user.o 00:01:29.755 CC lib/nvmf/rdma.o 00:01:29.755 CC lib/ftl/ftl_band.o 00:01:29.755 CC lib/nvmf/auth.o 00:01:29.755 CC lib/ftl/ftl_band_ops.o 00:01:29.755 CC lib/ftl/ftl_writer.o 00:01:29.755 CC lib/ftl/ftl_rq.o 00:01:29.755 CC lib/ftl/ftl_reloc.o 00:01:29.755 CC lib/ftl/ftl_l2p_cache.o 00:01:29.755 CC lib/ftl/ftl_p2l.o 00:01:29.755 CC lib/ftl/mngt/ftl_mngt.o 00:01:29.755 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:29.755 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:29.755 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:29.755 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:29.755 SYMLINK libspdk_blobfs.so 00:01:29.755 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:30.013 LIB libspdk_lvol.a 00:01:30.013 SO libspdk_lvol.so.10.0 00:01:30.013 SYMLINK libspdk_lvol.so 00:01:30.013 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:30.275 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:30.275 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:30.275 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:30.275 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:30.275 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:30.275 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:30.275 CC lib/ftl/utils/ftl_conf.o 00:01:30.275 CC lib/ftl/utils/ftl_md.o 00:01:30.275 CC lib/ftl/utils/ftl_mempool.o 00:01:30.275 CC 
lib/ftl/utils/ftl_bitmap.o 00:01:30.275 CC lib/ftl/utils/ftl_property.o 00:01:30.275 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:30.275 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:30.275 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:30.275 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:30.275 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:30.275 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:30.275 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:30.536 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:30.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:30.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:30.536 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:30.536 CC lib/ftl/base/ftl_base_dev.o 00:01:30.536 CC lib/ftl/base/ftl_base_bdev.o 00:01:30.536 CC lib/ftl/ftl_trace.o 00:01:30.536 LIB libspdk_nbd.a 00:01:30.536 SO libspdk_nbd.so.7.0 00:01:30.794 SYMLINK libspdk_nbd.so 00:01:30.794 LIB libspdk_scsi.a 00:01:30.794 SO libspdk_scsi.so.9.0 00:01:30.794 LIB libspdk_ublk.a 00:01:30.794 SO libspdk_ublk.so.3.0 00:01:30.794 SYMLINK libspdk_scsi.so 00:01:31.051 SYMLINK libspdk_ublk.so 00:01:31.051 CC lib/vhost/vhost.o 00:01:31.051 CC lib/iscsi/conn.o 00:01:31.051 CC lib/vhost/vhost_rpc.o 00:01:31.051 CC lib/iscsi/init_grp.o 00:01:31.051 CC lib/vhost/vhost_scsi.o 00:01:31.051 CC lib/iscsi/iscsi.o 00:01:31.051 CC lib/vhost/vhost_blk.o 00:01:31.051 CC lib/iscsi/md5.o 00:01:31.051 CC lib/vhost/rte_vhost_user.o 00:01:31.051 CC lib/iscsi/param.o 00:01:31.051 CC lib/iscsi/portal_grp.o 00:01:31.051 CC lib/iscsi/tgt_node.o 00:01:31.051 CC lib/iscsi/iscsi_subsystem.o 00:01:31.051 CC lib/iscsi/iscsi_rpc.o 00:01:31.051 CC lib/iscsi/task.o 00:01:31.308 LIB libspdk_ftl.a 00:01:31.596 SO libspdk_ftl.so.9.0 00:01:31.854 SYMLINK libspdk_ftl.so 00:01:32.418 LIB libspdk_vhost.a 00:01:32.418 SO libspdk_vhost.so.8.0 00:01:32.418 LIB libspdk_nvmf.a 00:01:32.418 SYMLINK libspdk_vhost.so 00:01:32.418 SO libspdk_nvmf.so.18.1 00:01:32.418 LIB libspdk_iscsi.a 00:01:32.418 SO libspdk_iscsi.so.8.0 00:01:32.675 SYMLINK libspdk_nvmf.so 00:01:32.675 
SYMLINK libspdk_iscsi.so 00:01:32.933 CC module/env_dpdk/env_dpdk_rpc.o 00:01:32.933 CC module/vfu_device/vfu_virtio.o 00:01:32.933 CC module/vfu_device/vfu_virtio_blk.o 00:01:32.933 CC module/vfu_device/vfu_virtio_scsi.o 00:01:32.933 CC module/vfu_device/vfu_virtio_rpc.o 00:01:32.933 CC module/accel/ioat/accel_ioat.o 00:01:32.933 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:32.933 CC module/accel/error/accel_error.o 00:01:32.933 CC module/blob/bdev/blob_bdev.o 00:01:32.933 CC module/keyring/linux/keyring.o 00:01:32.933 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:32.933 CC module/accel/iaa/accel_iaa.o 00:01:32.933 CC module/accel/dsa/accel_dsa.o 00:01:32.933 CC module/sock/posix/posix.o 00:01:32.933 CC module/keyring/file/keyring.o 00:01:32.933 CC module/accel/ioat/accel_ioat_rpc.o 00:01:32.933 CC module/scheduler/gscheduler/gscheduler.o 00:01:32.933 CC module/keyring/linux/keyring_rpc.o 00:01:32.933 CC module/accel/iaa/accel_iaa_rpc.o 00:01:32.933 CC module/accel/error/accel_error_rpc.o 00:01:32.933 CC module/accel/dsa/accel_dsa_rpc.o 00:01:32.933 CC module/keyring/file/keyring_rpc.o 00:01:33.191 LIB libspdk_env_dpdk_rpc.a 00:01:33.191 SO libspdk_env_dpdk_rpc.so.6.0 00:01:33.191 SYMLINK libspdk_env_dpdk_rpc.so 00:01:33.191 LIB libspdk_keyring_linux.a 00:01:33.191 LIB libspdk_keyring_file.a 00:01:33.191 LIB libspdk_scheduler_gscheduler.a 00:01:33.191 LIB libspdk_scheduler_dpdk_governor.a 00:01:33.191 SO libspdk_keyring_linux.so.1.0 00:01:33.191 SO libspdk_keyring_file.so.1.0 00:01:33.191 SO libspdk_scheduler_gscheduler.so.4.0 00:01:33.191 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:33.191 LIB libspdk_accel_ioat.a 00:01:33.191 LIB libspdk_accel_iaa.a 00:01:33.191 LIB libspdk_accel_error.a 00:01:33.191 SO libspdk_accel_ioat.so.6.0 00:01:33.191 SYMLINK libspdk_keyring_linux.so 00:01:33.191 SYMLINK libspdk_scheduler_gscheduler.so 00:01:33.191 SYMLINK libspdk_keyring_file.so 00:01:33.191 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:33.191 
LIB libspdk_scheduler_dynamic.a 00:01:33.191 SO libspdk_accel_error.so.2.0 00:01:33.191 SO libspdk_accel_iaa.so.3.0 00:01:33.449 SO libspdk_scheduler_dynamic.so.4.0 00:01:33.449 LIB libspdk_blob_bdev.a 00:01:33.449 SYMLINK libspdk_accel_ioat.so 00:01:33.449 SYMLINK libspdk_accel_error.so 00:01:33.449 SYMLINK libspdk_accel_iaa.so 00:01:33.449 LIB libspdk_accel_dsa.a 00:01:33.449 SO libspdk_blob_bdev.so.11.0 00:01:33.449 SYMLINK libspdk_scheduler_dynamic.so 00:01:33.449 SO libspdk_accel_dsa.so.5.0 00:01:33.449 SYMLINK libspdk_blob_bdev.so 00:01:33.449 SYMLINK libspdk_accel_dsa.so 00:01:33.709 LIB libspdk_vfu_device.a 00:01:33.709 SO libspdk_vfu_device.so.3.0 00:01:33.709 CC module/bdev/delay/vbdev_delay.o 00:01:33.709 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:33.709 CC module/bdev/error/vbdev_error.o 00:01:33.709 CC module/blobfs/bdev/blobfs_bdev.o 00:01:33.709 CC module/bdev/error/vbdev_error_rpc.o 00:01:33.709 CC module/bdev/lvol/vbdev_lvol.o 00:01:33.709 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:33.709 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:33.709 CC module/bdev/passthru/vbdev_passthru.o 00:01:33.709 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:33.709 CC module/bdev/gpt/gpt.o 00:01:33.709 CC module/bdev/nvme/bdev_nvme.o 00:01:33.709 CC module/bdev/null/bdev_null.o 00:01:33.709 CC module/bdev/gpt/vbdev_gpt.o 00:01:33.709 CC module/bdev/null/bdev_null_rpc.o 00:01:33.709 CC module/bdev/malloc/bdev_malloc.o 00:01:33.709 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:33.709 CC module/bdev/raid/bdev_raid.o 00:01:33.709 CC module/bdev/nvme/bdev_mdns_client.o 00:01:33.709 CC module/bdev/nvme/nvme_rpc.o 00:01:33.709 CC module/bdev/raid/bdev_raid_rpc.o 00:01:33.709 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:33.709 CC module/bdev/split/vbdev_split.o 00:01:33.709 CC module/bdev/ftl/bdev_ftl.o 00:01:33.709 CC module/bdev/nvme/vbdev_opal.o 00:01:33.709 CC module/bdev/raid/bdev_raid_sb.o 00:01:33.709 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:33.709 CC 
module/bdev/raid/raid0.o 00:01:33.709 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:33.709 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:33.709 CC module/bdev/raid/raid1.o 00:01:33.709 CC module/bdev/split/vbdev_split_rpc.o 00:01:33.709 CC module/bdev/iscsi/bdev_iscsi.o 00:01:33.709 CC module/bdev/raid/concat.o 00:01:33.709 CC module/bdev/aio/bdev_aio.o 00:01:33.709 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:33.709 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:33.709 CC module/bdev/aio/bdev_aio_rpc.o 00:01:33.709 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:33.709 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:33.709 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:33.709 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:33.709 SYMLINK libspdk_vfu_device.so 00:01:33.969 LIB libspdk_sock_posix.a 00:01:33.969 SO libspdk_sock_posix.so.6.0 00:01:33.969 LIB libspdk_bdev_null.a 00:01:33.969 LIB libspdk_blobfs_bdev.a 00:01:33.969 LIB libspdk_bdev_error.a 00:01:33.969 LIB libspdk_bdev_gpt.a 00:01:33.969 SO libspdk_bdev_null.so.6.0 00:01:33.969 SO libspdk_blobfs_bdev.so.6.0 00:01:33.969 SO libspdk_bdev_error.so.6.0 00:01:34.227 SO libspdk_bdev_gpt.so.6.0 00:01:34.227 SYMLINK libspdk_sock_posix.so 00:01:34.227 SYMLINK libspdk_bdev_null.so 00:01:34.227 LIB libspdk_bdev_split.a 00:01:34.227 SYMLINK libspdk_blobfs_bdev.so 00:01:34.227 SYMLINK libspdk_bdev_error.so 00:01:34.227 SO libspdk_bdev_split.so.6.0 00:01:34.227 SYMLINK libspdk_bdev_gpt.so 00:01:34.227 LIB libspdk_bdev_ftl.a 00:01:34.227 LIB libspdk_bdev_passthru.a 00:01:34.227 SO libspdk_bdev_ftl.so.6.0 00:01:34.227 SYMLINK libspdk_bdev_split.so 00:01:34.227 SO libspdk_bdev_passthru.so.6.0 00:01:34.227 LIB libspdk_bdev_aio.a 00:01:34.227 LIB libspdk_bdev_zone_block.a 00:01:34.227 SYMLINK libspdk_bdev_ftl.so 00:01:34.227 SO libspdk_bdev_aio.so.6.0 00:01:34.227 SO libspdk_bdev_zone_block.so.6.0 00:01:34.227 LIB libspdk_bdev_delay.a 00:01:34.227 SYMLINK libspdk_bdev_passthru.so 00:01:34.227 LIB libspdk_bdev_iscsi.a 
00:01:34.227 LIB libspdk_bdev_malloc.a 00:01:34.227 SO libspdk_bdev_delay.so.6.0 00:01:34.227 SO libspdk_bdev_iscsi.so.6.0 00:01:34.227 SO libspdk_bdev_malloc.so.6.0 00:01:34.227 SYMLINK libspdk_bdev_zone_block.so 00:01:34.227 SYMLINK libspdk_bdev_aio.so 00:01:34.486 SYMLINK libspdk_bdev_delay.so 00:01:34.486 SYMLINK libspdk_bdev_iscsi.so 00:01:34.486 SYMLINK libspdk_bdev_malloc.so 00:01:34.486 LIB libspdk_bdev_lvol.a 00:01:34.486 LIB libspdk_bdev_virtio.a 00:01:34.486 SO libspdk_bdev_lvol.so.6.0 00:01:34.486 SO libspdk_bdev_virtio.so.6.0 00:01:34.486 SYMLINK libspdk_bdev_lvol.so 00:01:34.486 SYMLINK libspdk_bdev_virtio.so 00:01:34.745 LIB libspdk_bdev_raid.a 00:01:35.004 SO libspdk_bdev_raid.so.6.0 00:01:35.004 SYMLINK libspdk_bdev_raid.so 00:01:35.940 LIB libspdk_bdev_nvme.a 00:01:35.940 SO libspdk_bdev_nvme.so.7.0 00:01:36.198 SYMLINK libspdk_bdev_nvme.so 00:01:36.456 CC module/event/subsystems/sock/sock.o 00:01:36.456 CC module/event/subsystems/keyring/keyring.o 00:01:36.456 CC module/event/subsystems/iobuf/iobuf.o 00:01:36.456 CC module/event/subsystems/vmd/vmd.o 00:01:36.456 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:36.456 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:36.456 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:36.456 CC module/event/subsystems/scheduler/scheduler.o 00:01:36.456 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:36.714 LIB libspdk_event_keyring.a 00:01:36.714 LIB libspdk_event_vhost_blk.a 00:01:36.714 LIB libspdk_event_vfu_tgt.a 00:01:36.714 LIB libspdk_event_scheduler.a 00:01:36.714 LIB libspdk_event_vmd.a 00:01:36.714 LIB libspdk_event_sock.a 00:01:36.714 SO libspdk_event_keyring.so.1.0 00:01:36.714 SO libspdk_event_vhost_blk.so.3.0 00:01:36.714 LIB libspdk_event_iobuf.a 00:01:36.714 SO libspdk_event_vfu_tgt.so.3.0 00:01:36.714 SO libspdk_event_scheduler.so.4.0 00:01:36.714 SO libspdk_event_sock.so.5.0 00:01:36.714 SO libspdk_event_vmd.so.6.0 00:01:36.714 SO libspdk_event_iobuf.so.3.0 00:01:36.714 SYMLINK 
libspdk_event_keyring.so 00:01:36.714 SYMLINK libspdk_event_vhost_blk.so 00:01:36.714 SYMLINK libspdk_event_vfu_tgt.so 00:01:36.714 SYMLINK libspdk_event_sock.so 00:01:36.714 SYMLINK libspdk_event_scheduler.so 00:01:36.714 SYMLINK libspdk_event_vmd.so 00:01:36.714 SYMLINK libspdk_event_iobuf.so 00:01:36.972 CC module/event/subsystems/accel/accel.o 00:01:36.972 LIB libspdk_event_accel.a 00:01:36.972 SO libspdk_event_accel.so.6.0 00:01:37.230 SYMLINK libspdk_event_accel.so 00:01:37.230 CC module/event/subsystems/bdev/bdev.o 00:01:37.486 LIB libspdk_event_bdev.a 00:01:37.486 SO libspdk_event_bdev.so.6.0 00:01:37.486 SYMLINK libspdk_event_bdev.so 00:01:37.744 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:37.744 CC module/event/subsystems/scsi/scsi.o 00:01:37.744 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:37.744 CC module/event/subsystems/ublk/ublk.o 00:01:37.744 CC module/event/subsystems/nbd/nbd.o 00:01:37.744 LIB libspdk_event_nbd.a 00:01:37.744 LIB libspdk_event_ublk.a 00:01:37.744 LIB libspdk_event_scsi.a 00:01:37.744 SO libspdk_event_nbd.so.6.0 00:01:37.744 SO libspdk_event_ublk.so.3.0 00:01:38.001 SO libspdk_event_scsi.so.6.0 00:01:38.001 SYMLINK libspdk_event_nbd.so 00:01:38.001 SYMLINK libspdk_event_ublk.so 00:01:38.001 SYMLINK libspdk_event_scsi.so 00:01:38.001 LIB libspdk_event_nvmf.a 00:01:38.001 SO libspdk_event_nvmf.so.6.0 00:01:38.001 SYMLINK libspdk_event_nvmf.so 00:01:38.001 CC module/event/subsystems/iscsi/iscsi.o 00:01:38.001 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:38.260 LIB libspdk_event_vhost_scsi.a 00:01:38.260 LIB libspdk_event_iscsi.a 00:01:38.260 SO libspdk_event_vhost_scsi.so.3.0 00:01:38.260 SO libspdk_event_iscsi.so.6.0 00:01:38.260 SYMLINK libspdk_event_vhost_scsi.so 00:01:38.260 SYMLINK libspdk_event_iscsi.so 00:01:38.521 SO libspdk.so.6.0 00:01:38.521 SYMLINK libspdk.so 00:01:38.521 CC app/spdk_nvme_identify/identify.o 00:01:38.521 CC app/trace_record/trace_record.o 00:01:38.521 CXX app/trace/trace.o 00:01:38.521 
CC app/spdk_top/spdk_top.o 00:01:38.521 CC app/spdk_nvme_perf/perf.o 00:01:38.521 TEST_HEADER include/spdk/accel.h 00:01:38.521 TEST_HEADER include/spdk/assert.h 00:01:38.521 TEST_HEADER include/spdk/accel_module.h 00:01:38.521 TEST_HEADER include/spdk/barrier.h 00:01:38.521 TEST_HEADER include/spdk/base64.h 00:01:38.521 TEST_HEADER include/spdk/bdev.h 00:01:38.521 CC app/spdk_nvme_discover/discovery_aer.o 00:01:38.521 TEST_HEADER include/spdk/bdev_module.h 00:01:38.521 CC app/spdk_lspci/spdk_lspci.o 00:01:38.521 TEST_HEADER include/spdk/bdev_zone.h 00:01:38.521 CC test/rpc_client/rpc_client_test.o 00:01:38.521 TEST_HEADER include/spdk/bit_array.h 00:01:38.521 TEST_HEADER include/spdk/bit_pool.h 00:01:38.521 TEST_HEADER include/spdk/blob_bdev.h 00:01:38.790 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:38.790 TEST_HEADER include/spdk/blobfs.h 00:01:38.790 TEST_HEADER include/spdk/blob.h 00:01:38.790 TEST_HEADER include/spdk/conf.h 00:01:38.790 TEST_HEADER include/spdk/config.h 00:01:38.790 TEST_HEADER include/spdk/cpuset.h 00:01:38.790 TEST_HEADER include/spdk/crc16.h 00:01:38.790 TEST_HEADER include/spdk/crc32.h 00:01:38.790 TEST_HEADER include/spdk/dif.h 00:01:38.790 TEST_HEADER include/spdk/crc64.h 00:01:38.790 TEST_HEADER include/spdk/dma.h 00:01:38.790 TEST_HEADER include/spdk/endian.h 00:01:38.790 TEST_HEADER include/spdk/env_dpdk.h 00:01:38.790 TEST_HEADER include/spdk/env.h 00:01:38.790 TEST_HEADER include/spdk/event.h 00:01:38.790 TEST_HEADER include/spdk/fd_group.h 00:01:38.790 TEST_HEADER include/spdk/fd.h 00:01:38.790 TEST_HEADER include/spdk/file.h 00:01:38.790 TEST_HEADER include/spdk/ftl.h 00:01:38.790 TEST_HEADER include/spdk/gpt_spec.h 00:01:38.790 TEST_HEADER include/spdk/hexlify.h 00:01:38.790 TEST_HEADER include/spdk/histogram_data.h 00:01:38.790 TEST_HEADER include/spdk/idxd.h 00:01:38.790 TEST_HEADER include/spdk/idxd_spec.h 00:01:38.790 TEST_HEADER include/spdk/init.h 00:01:38.790 TEST_HEADER include/spdk/ioat.h 00:01:38.790 TEST_HEADER 
include/spdk/ioat_spec.h 00:01:38.790 TEST_HEADER include/spdk/iscsi_spec.h 00:01:38.790 TEST_HEADER include/spdk/jsonrpc.h 00:01:38.790 TEST_HEADER include/spdk/json.h 00:01:38.790 TEST_HEADER include/spdk/keyring.h 00:01:38.790 TEST_HEADER include/spdk/keyring_module.h 00:01:38.790 TEST_HEADER include/spdk/likely.h 00:01:38.790 TEST_HEADER include/spdk/log.h 00:01:38.790 TEST_HEADER include/spdk/lvol.h 00:01:38.790 TEST_HEADER include/spdk/memory.h 00:01:38.790 TEST_HEADER include/spdk/mmio.h 00:01:38.790 TEST_HEADER include/spdk/nbd.h 00:01:38.790 TEST_HEADER include/spdk/notify.h 00:01:38.790 TEST_HEADER include/spdk/nvme.h 00:01:38.790 TEST_HEADER include/spdk/nvme_intel.h 00:01:38.790 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:38.790 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:38.790 TEST_HEADER include/spdk/nvme_spec.h 00:01:38.790 TEST_HEADER include/spdk/nvme_zns.h 00:01:38.790 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:38.790 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:38.790 TEST_HEADER include/spdk/nvmf.h 00:01:38.790 TEST_HEADER include/spdk/nvmf_spec.h 00:01:38.790 TEST_HEADER include/spdk/nvmf_transport.h 00:01:38.790 TEST_HEADER include/spdk/opal.h 00:01:38.790 TEST_HEADER include/spdk/opal_spec.h 00:01:38.790 TEST_HEADER include/spdk/pci_ids.h 00:01:38.790 TEST_HEADER include/spdk/pipe.h 00:01:38.790 TEST_HEADER include/spdk/queue.h 00:01:38.790 TEST_HEADER include/spdk/reduce.h 00:01:38.790 TEST_HEADER include/spdk/rpc.h 00:01:38.790 TEST_HEADER include/spdk/scheduler.h 00:01:38.790 TEST_HEADER include/spdk/scsi.h 00:01:38.790 TEST_HEADER include/spdk/scsi_spec.h 00:01:38.790 TEST_HEADER include/spdk/sock.h 00:01:38.790 TEST_HEADER include/spdk/stdinc.h 00:01:38.790 TEST_HEADER include/spdk/string.h 00:01:38.790 TEST_HEADER include/spdk/thread.h 00:01:38.790 TEST_HEADER include/spdk/trace.h 00:01:38.790 TEST_HEADER include/spdk/trace_parser.h 00:01:38.790 TEST_HEADER include/spdk/tree.h 00:01:38.790 TEST_HEADER include/spdk/ublk.h 
00:01:38.790 TEST_HEADER include/spdk/util.h 00:01:38.790 TEST_HEADER include/spdk/uuid.h 00:01:38.790 TEST_HEADER include/spdk/version.h 00:01:38.790 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:38.790 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:38.790 TEST_HEADER include/spdk/vmd.h 00:01:38.790 TEST_HEADER include/spdk/vhost.h 00:01:38.790 TEST_HEADER include/spdk/xor.h 00:01:38.790 TEST_HEADER include/spdk/zipf.h 00:01:38.790 CXX test/cpp_headers/accel.o 00:01:38.790 CXX test/cpp_headers/accel_module.o 00:01:38.790 CXX test/cpp_headers/assert.o 00:01:38.790 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:38.790 CXX test/cpp_headers/barrier.o 00:01:38.790 CXX test/cpp_headers/base64.o 00:01:38.790 CXX test/cpp_headers/bdev.o 00:01:38.790 CXX test/cpp_headers/bdev_module.o 00:01:38.790 CXX test/cpp_headers/bdev_zone.o 00:01:38.790 CXX test/cpp_headers/bit_array.o 00:01:38.790 CXX test/cpp_headers/bit_pool.o 00:01:38.790 CC app/spdk_dd/spdk_dd.o 00:01:38.790 CXX test/cpp_headers/blob_bdev.o 00:01:38.790 CXX test/cpp_headers/blobfs_bdev.o 00:01:38.790 CXX test/cpp_headers/blobfs.o 00:01:38.790 CXX test/cpp_headers/blob.o 00:01:38.790 CXX test/cpp_headers/conf.o 00:01:38.790 CXX test/cpp_headers/config.o 00:01:38.790 CXX test/cpp_headers/cpuset.o 00:01:38.790 CC app/nvmf_tgt/nvmf_main.o 00:01:38.790 CXX test/cpp_headers/crc16.o 00:01:38.790 CC app/iscsi_tgt/iscsi_tgt.o 00:01:38.790 CXX test/cpp_headers/crc32.o 00:01:38.790 CC examples/ioat/perf/perf.o 00:01:38.790 CC app/spdk_tgt/spdk_tgt.o 00:01:38.790 CC examples/util/zipf/zipf.o 00:01:38.790 CC examples/ioat/verify/verify.o 00:01:38.790 CC test/thread/poller_perf/poller_perf.o 00:01:38.790 CC test/env/pci/pci_ut.o 00:01:38.790 CC test/app/stub/stub.o 00:01:38.790 CC test/app/histogram_perf/histogram_perf.o 00:01:38.790 CC app/fio/nvme/fio_plugin.o 00:01:38.790 CC test/app/jsoncat/jsoncat.o 00:01:38.790 CC test/env/memory/memory_ut.o 00:01:38.790 CC test/env/vtophys/vtophys.o 00:01:38.790 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:38.790 CC test/dma/test_dma/test_dma.o 00:01:38.790 CC test/app/bdev_svc/bdev_svc.o 00:01:38.790 CC app/fio/bdev/fio_plugin.o 00:01:39.051 LINK spdk_lspci 00:01:39.051 CC test/env/mem_callbacks/mem_callbacks.o 00:01:39.051 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:39.051 LINK rpc_client_test 00:01:39.051 LINK spdk_nvme_discover 00:01:39.051 LINK jsoncat 00:01:39.051 LINK histogram_perf 00:01:39.051 CXX test/cpp_headers/crc64.o 00:01:39.051 LINK zipf 00:01:39.051 LINK poller_perf 00:01:39.051 LINK vtophys 00:01:39.051 CXX test/cpp_headers/dif.o 00:01:39.051 CXX test/cpp_headers/dma.o 00:01:39.051 LINK nvmf_tgt 00:01:39.051 CXX test/cpp_headers/endian.o 00:01:39.051 CXX test/cpp_headers/env_dpdk.o 00:01:39.051 CXX test/cpp_headers/env.o 00:01:39.051 LINK stub 00:01:39.051 CXX test/cpp_headers/event.o 00:01:39.051 LINK interrupt_tgt 00:01:39.051 CXX test/cpp_headers/fd_group.o 00:01:39.051 CXX test/cpp_headers/fd.o 00:01:39.051 LINK env_dpdk_post_init 00:01:39.320 CXX test/cpp_headers/file.o 00:01:39.320 CXX test/cpp_headers/ftl.o 00:01:39.320 CXX test/cpp_headers/gpt_spec.o 00:01:39.320 LINK spdk_trace_record 00:01:39.320 CXX test/cpp_headers/hexlify.o 00:01:39.320 LINK ioat_perf 00:01:39.320 LINK iscsi_tgt 00:01:39.320 CXX test/cpp_headers/histogram_data.o 00:01:39.320 LINK verify 00:01:39.320 CXX test/cpp_headers/idxd.o 00:01:39.320 LINK bdev_svc 00:01:39.320 CXX test/cpp_headers/idxd_spec.o 00:01:39.320 CXX test/cpp_headers/init.o 00:01:39.320 LINK spdk_tgt 00:01:39.320 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:39.320 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:39.320 CXX test/cpp_headers/ioat.o 00:01:39.320 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:39.320 CXX test/cpp_headers/ioat_spec.o 00:01:39.320 LINK spdk_dd 00:01:39.320 CXX test/cpp_headers/iscsi_spec.o 00:01:39.579 CXX test/cpp_headers/json.o 00:01:39.579 CXX test/cpp_headers/jsonrpc.o 00:01:39.579 CXX test/cpp_headers/keyring.o 
00:01:39.579 LINK spdk_trace 00:01:39.579 CXX test/cpp_headers/keyring_module.o 00:01:39.579 CXX test/cpp_headers/likely.o 00:01:39.579 CXX test/cpp_headers/log.o 00:01:39.579 CXX test/cpp_headers/lvol.o 00:01:39.579 CXX test/cpp_headers/memory.o 00:01:39.579 CXX test/cpp_headers/mmio.o 00:01:39.579 CXX test/cpp_headers/nbd.o 00:01:39.579 LINK pci_ut 00:01:39.579 CXX test/cpp_headers/notify.o 00:01:39.579 CXX test/cpp_headers/nvme.o 00:01:39.579 CXX test/cpp_headers/nvme_intel.o 00:01:39.579 CXX test/cpp_headers/nvme_ocssd.o 00:01:39.579 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:39.579 CXX test/cpp_headers/nvme_spec.o 00:01:39.579 CXX test/cpp_headers/nvme_zns.o 00:01:39.579 CXX test/cpp_headers/nvmf_cmd.o 00:01:39.579 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:39.579 CXX test/cpp_headers/nvmf.o 00:01:39.579 CXX test/cpp_headers/nvmf_spec.o 00:01:39.579 CXX test/cpp_headers/nvmf_transport.o 00:01:39.579 LINK test_dma 00:01:39.579 CXX test/cpp_headers/opal.o 00:01:39.579 CXX test/cpp_headers/opal_spec.o 00:01:39.579 CXX test/cpp_headers/pci_ids.o 00:01:39.579 CXX test/cpp_headers/pipe.o 00:01:39.839 CXX test/cpp_headers/queue.o 00:01:39.839 CXX test/cpp_headers/reduce.o 00:01:39.839 LINK nvme_fuzz 00:01:39.839 CXX test/cpp_headers/rpc.o 00:01:39.839 CC examples/thread/thread/thread_ex.o 00:01:39.839 CC examples/sock/hello_world/hello_sock.o 00:01:39.839 CC test/event/event_perf/event_perf.o 00:01:39.839 CC examples/vmd/lsvmd/lsvmd.o 00:01:39.839 CC examples/idxd/perf/perf.o 00:01:39.839 CC test/event/reactor/reactor.o 00:01:39.839 LINK spdk_nvme 00:01:39.839 CXX test/cpp_headers/scheduler.o 00:01:39.839 LINK spdk_bdev 00:01:39.839 CXX test/cpp_headers/scsi.o 00:01:39.839 CXX test/cpp_headers/scsi_spec.o 00:01:39.839 CXX test/cpp_headers/sock.o 00:01:40.103 CC test/event/reactor_perf/reactor_perf.o 00:01:40.103 CC examples/vmd/led/led.o 00:01:40.103 CXX test/cpp_headers/stdinc.o 00:01:40.103 CXX test/cpp_headers/string.o 00:01:40.103 CXX test/cpp_headers/thread.o 
00:01:40.103 CC test/event/app_repeat/app_repeat.o 00:01:40.103 CXX test/cpp_headers/trace.o 00:01:40.103 CXX test/cpp_headers/trace_parser.o 00:01:40.103 CXX test/cpp_headers/tree.o 00:01:40.103 CXX test/cpp_headers/ublk.o 00:01:40.103 CXX test/cpp_headers/util.o 00:01:40.103 CXX test/cpp_headers/uuid.o 00:01:40.103 CC test/event/scheduler/scheduler.o 00:01:40.103 CC app/vhost/vhost.o 00:01:40.103 CXX test/cpp_headers/version.o 00:01:40.103 CXX test/cpp_headers/vfio_user_pci.o 00:01:40.103 CXX test/cpp_headers/vfio_user_spec.o 00:01:40.103 CXX test/cpp_headers/vhost.o 00:01:40.103 CXX test/cpp_headers/vmd.o 00:01:40.103 CXX test/cpp_headers/xor.o 00:01:40.103 CXX test/cpp_headers/zipf.o 00:01:40.103 LINK mem_callbacks 00:01:40.103 LINK lsvmd 00:01:40.103 LINK event_perf 00:01:40.103 LINK reactor 00:01:40.103 LINK spdk_nvme_perf 00:01:40.367 LINK vhost_fuzz 00:01:40.367 LINK reactor_perf 00:01:40.367 LINK spdk_top 00:01:40.367 LINK led 00:01:40.367 LINK app_repeat 00:01:40.367 LINK spdk_nvme_identify 00:01:40.367 LINK thread 00:01:40.367 LINK hello_sock 00:01:40.367 CC test/nvme/aer/aer.o 00:01:40.367 CC test/nvme/err_injection/err_injection.o 00:01:40.367 CC test/nvme/sgl/sgl.o 00:01:40.367 CC test/nvme/startup/startup.o 00:01:40.367 CC test/nvme/e2edp/nvme_dp.o 00:01:40.367 CC test/nvme/reset/reset.o 00:01:40.367 CC test/nvme/reserve/reserve.o 00:01:40.367 CC test/nvme/simple_copy/simple_copy.o 00:01:40.367 CC test/nvme/overhead/overhead.o 00:01:40.367 CC test/nvme/boot_partition/boot_partition.o 00:01:40.367 CC test/blobfs/mkfs/mkfs.o 00:01:40.367 CC test/accel/dif/dif.o 00:01:40.367 CC test/nvme/compliance/nvme_compliance.o 00:01:40.367 CC test/nvme/connect_stress/connect_stress.o 00:01:40.367 CC test/lvol/esnap/esnap.o 00:01:40.367 LINK vhost 00:01:40.367 CC test/nvme/fused_ordering/fused_ordering.o 00:01:40.367 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:40.625 CC test/nvme/fdp/fdp.o 00:01:40.625 CC test/nvme/cuse/cuse.o 00:01:40.625 LINK scheduler 
00:01:40.625 LINK idxd_perf 00:01:40.625 LINK err_injection 00:01:40.625 LINK boot_partition 00:01:40.625 LINK reserve 00:01:40.625 LINK startup 00:01:40.625 LINK doorbell_aers 00:01:40.625 LINK mkfs 00:01:40.882 LINK sgl 00:01:40.882 LINK nvme_dp 00:01:40.882 LINK connect_stress 00:01:40.882 CC examples/nvme/abort/abort.o 00:01:40.882 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:40.882 CC examples/nvme/reconnect/reconnect.o 00:01:40.882 CC examples/nvme/hotplug/hotplug.o 00:01:40.882 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:40.882 CC examples/nvme/arbitration/arbitration.o 00:01:40.882 CC examples/nvme/hello_world/hello_world.o 00:01:40.882 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:40.882 LINK overhead 00:01:40.882 LINK simple_copy 00:01:40.882 LINK memory_ut 00:01:40.882 LINK aer 00:01:40.882 LINK fused_ordering 00:01:40.882 LINK reset 00:01:40.882 LINK nvme_compliance 00:01:40.882 LINK fdp 00:01:40.882 CC examples/accel/perf/accel_perf.o 00:01:40.882 CC examples/blob/cli/blobcli.o 00:01:41.139 CC examples/blob/hello_world/hello_blob.o 00:01:41.139 LINK cmb_copy 00:01:41.139 LINK pmr_persistence 00:01:41.139 LINK hello_world 00:01:41.139 LINK dif 00:01:41.139 LINK hotplug 00:01:41.139 LINK arbitration 00:01:41.139 LINK reconnect 00:01:41.139 LINK abort 00:01:41.396 LINK hello_blob 00:01:41.396 LINK nvme_manage 00:01:41.396 CC test/bdev/bdevio/bdevio.o 00:01:41.654 LINK blobcli 00:01:41.654 LINK accel_perf 00:01:41.912 LINK iscsi_fuzz 00:01:41.912 CC examples/bdev/hello_world/hello_bdev.o 00:01:41.912 LINK bdevio 00:01:41.912 CC examples/bdev/bdevperf/bdevperf.o 00:01:42.170 LINK hello_bdev 00:01:42.170 LINK cuse 00:01:42.736 LINK bdevperf 00:01:42.994 CC examples/nvmf/nvmf/nvmf.o 00:01:43.561 LINK nvmf 00:01:46.098 LINK esnap 00:01:46.099 00:01:46.099 real 0m49.073s 00:01:46.099 user 10m6.138s 00:01:46.099 sys 2m28.750s 00:01:46.099 11:06:12 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:46.099 11:06:12 make -- 
common/autotest_common.sh@10 -- $ set +x 00:01:46.099 ************************************ 00:01:46.099 END TEST make 00:01:46.099 ************************************ 00:01:46.099 11:06:12 -- common/autotest_common.sh@1142 -- $ return 0 00:01:46.099 11:06:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:46.099 11:06:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:46.099 11:06:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:46.099 11:06:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:46.099 11:06:12 -- pm/common@44 -- $ pid=363522 00:01:46.099 11:06:12 -- pm/common@50 -- $ kill -TERM 363522 00:01:46.099 11:06:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:46.099 11:06:12 -- pm/common@44 -- $ pid=363524 00:01:46.099 11:06:12 -- pm/common@50 -- $ kill -TERM 363524 00:01:46.099 11:06:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:46.099 11:06:12 -- pm/common@44 -- $ pid=363526 00:01:46.099 11:06:12 -- pm/common@50 -- $ kill -TERM 363526 00:01:46.099 11:06:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:46.099 11:06:12 -- pm/common@44 -- $ pid=363552 00:01:46.099 11:06:12 -- pm/common@50 -- $ sudo -E kill -TERM 363552 00:01:46.099 11:06:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:46.099 11:06:12 -- nvmf/common.sh@7 -- # uname 
-s 00:01:46.099 11:06:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:46.099 11:06:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:46.099 11:06:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:46.099 11:06:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:46.099 11:06:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:46.099 11:06:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:46.099 11:06:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:46.099 11:06:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:46.099 11:06:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:46.099 11:06:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:46.099 11:06:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:46.099 11:06:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:46.099 11:06:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:46.099 11:06:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:46.099 11:06:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:46.099 11:06:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:46.099 11:06:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:46.099 11:06:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:46.099 11:06:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.099 11:06:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.099 11:06:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.099 11:06:12 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.099 11:06:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.099 11:06:12 -- paths/export.sh@5 -- # export PATH 00:01:46.099 11:06:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.099 11:06:12 -- nvmf/common.sh@47 -- # : 0 00:01:46.099 11:06:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:46.099 11:06:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:46.099 11:06:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:46.099 11:06:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:46.099 11:06:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:46.099 11:06:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:46.099 11:06:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:46.099 11:06:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:46.099 11:06:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:46.099 11:06:12 -- spdk/autotest.sh@32 -- # uname -s 00:01:46.099 11:06:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:46.099 11:06:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:46.099 11:06:12 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:46.099 11:06:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:46.099 11:06:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:46.099 11:06:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:46.099 11:06:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:46.099 11:06:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:46.099 11:06:12 -- spdk/autotest.sh@48 -- # udevadm_pid=419603 00:01:46.099 11:06:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:46.099 11:06:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:46.099 11:06:12 -- pm/common@17 -- # local monitor 00:01:46.099 11:06:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@21 -- # date +%s 00:01:46.099 11:06:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.099 11:06:12 -- pm/common@21 -- # date +%s 00:01:46.099 11:06:12 -- pm/common@25 -- # sleep 1 00:01:46.099 11:06:12 -- pm/common@21 -- # date +%s 00:01:46.099 11:06:12 -- pm/common@21 -- # date +%s 00:01:46.099 11:06:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775172 00:01:46.099 11:06:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775172 00:01:46.099 11:06:12 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775172 00:01:46.099 11:06:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720775172 00:01:46.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775172_collect-vmstat.pm.log 00:01:46.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775172_collect-cpu-load.pm.log 00:01:46.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775172_collect-cpu-temp.pm.log 00:01:46.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720775172_collect-bmc-pm.bmc.pm.log 00:01:47.031 11:06:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:47.031 11:06:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:47.031 11:06:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:47.031 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:01:47.031 11:06:13 -- spdk/autotest.sh@59 -- # create_test_list 00:01:47.031 11:06:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:47.031 11:06:13 -- common/autotest_common.sh@10 -- # set +x 00:01:47.291 11:06:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:47.291 11:06:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.291 11:06:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.291 11:06:13 -- spdk/autotest.sh@62 -- # 
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:47.291 11:06:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.291 11:06:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:47.291 11:06:13 -- common/autotest_common.sh@1455 -- # uname 00:01:47.291 11:06:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:47.291 11:06:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:47.291 11:06:13 -- common/autotest_common.sh@1475 -- # uname 00:01:47.291 11:06:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:47.291 11:06:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:47.291 11:06:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:47.291 11:06:13 -- spdk/autotest.sh@72 -- # hash lcov 00:01:47.291 11:06:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:47.291 11:06:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:47.291 --rc lcov_branch_coverage=1 00:01:47.291 --rc lcov_function_coverage=1 00:01:47.291 --rc genhtml_branch_coverage=1 00:01:47.291 --rc genhtml_function_coverage=1 00:01:47.291 --rc genhtml_legend=1 00:01:47.291 --rc geninfo_all_blocks=1 00:01:47.291 ' 00:01:47.291 11:06:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:47.291 --rc lcov_branch_coverage=1 00:01:47.291 --rc lcov_function_coverage=1 00:01:47.291 --rc genhtml_branch_coverage=1 00:01:47.291 --rc genhtml_function_coverage=1 00:01:47.291 --rc genhtml_legend=1 00:01:47.291 --rc geninfo_all_blocks=1 00:01:47.291 ' 00:01:47.291 11:06:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:47.291 --rc lcov_branch_coverage=1 00:01:47.291 --rc lcov_function_coverage=1 00:01:47.291 --rc genhtml_branch_coverage=1 00:01:47.291 --rc genhtml_function_coverage=1 00:01:47.291 --rc genhtml_legend=1 00:01:47.291 --rc geninfo_all_blocks=1 00:01:47.291 --no-external' 00:01:47.291 11:06:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:47.291 --rc 
lcov_branch_coverage=1 00:01:47.291 --rc lcov_function_coverage=1 00:01:47.291 --rc genhtml_branch_coverage=1 00:01:47.291 --rc genhtml_function_coverage=1 00:01:47.291 --rc genhtml_legend=1 00:01:47.291 --rc geninfo_all_blocks=1 00:01:47.291 --no-external' 00:01:47.291 11:06:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:47.291 lcov: LCOV version 1.14 00:01:47.291 11:06:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:02.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:02.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:17.194 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:17.194 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:17.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions 
found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:02:17.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:02:17.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:02:17.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:02:17.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:17.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:17.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:17.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:17.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:17.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:02:20.487 11:06:46 -- spdk/autotest.sh@89 -- #
timing_enter pre_cleanup
00:02:20.487 11:06:46 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:20.487 11:06:46 -- common/autotest_common.sh@10 -- # set +x
00:02:20.487 11:06:46 -- spdk/autotest.sh@91 -- # rm -f
00:02:20.487 11:06:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:21.424 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:02:21.424 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:02:21.424 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:02:21.424 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:02:21.424 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:02:21.424 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:02:21.424 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:02:21.424 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:02:21.424 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:02:21.424 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:02:21.424 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:02:21.424 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:02:21.424 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:02:21.424 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:02:21.424 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:02:21.424 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:02:21.424 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:02:21.685 11:06:47 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:21.685 11:06:47 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:21.685 11:06:47 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:21.685 11:06:47 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:21.685 11:06:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:21.685 11:06:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:21.685 11:06:47 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:21.685 11:06:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:21.685 11:06:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:21.685 11:06:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:21.685 11:06:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:21.685 11:06:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:21.685 11:06:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:21.685 11:06:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:21.685 11:06:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:02:21.685 11:06:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:21.685 11:06:47 -- scripts/common.sh@391 -- # pt=
00:02:21.685 11:06:47 -- scripts/common.sh@392 -- # return 1
00:02:21.686 11:06:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:21.686 1+0 records in
00:02:21.686 1+0 records out
00:02:21.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00180686 s, 580 MB/s
00:02:21.686 11:06:47 -- spdk/autotest.sh@118 -- # sync
00:02:21.686 11:06:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:21.686 11:06:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:21.686 11:06:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:23.590 11:06:49 -- spdk/autotest.sh@124 -- # uname -s
00:02:23.590 11:06:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:23.590 11:06:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:23.590 11:06:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:23.590
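[Editor's note] The reset path traced above probes each NVMe namespace before wiping it: `is_block_zoned` reads the device's sysfs `queue/zoned` attribute and treats anything other than `none` as a zoned device that must not be blindly overwritten. The following is a minimal sketch of that check, not the verbatim SPDK helper; the `sysfs_root` parameter is an assumption added here purely so the function can be exercised against a fixture directory (the real helper in autotest_common.sh reads `/sys/block` directly).

```shell
#!/usr/bin/env bash
# Sketch: report whether a block device is zoned, per its sysfs attribute.
# A device without a queue/zoned file predates zoned support: treat as not zoned.
is_block_zoned() {
    local device=$1
    local sysfs_root=${2:-/sys/block}   # assumption: parameterized for testing
    local zoned_file=$sysfs_root/$device/queue/zoned
    [[ -e $zoned_file ]] || return 1
    # "none" means a regular (conventional) device; anything else is zoned.
    [[ $(<"$zoned_file") != none ]]
}
```

In the trace above this evaluates to `[[ none != none ]]`, i.e. false, so nvme0n1 is counted as non-zoned and the script proceeds to the GPT probe and the `dd` wipe.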
11:06:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:23.590 11:06:49 -- common/autotest_common.sh@10 -- # set +x
00:02:23.590 ************************************
00:02:23.590 START TEST setup.sh
00:02:23.590 ************************************
00:02:23.590 11:06:49 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:23.590 * Looking for test storage...
00:02:23.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:23.590 11:06:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:23.590 11:06:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:23.590 11:06:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:23.590 11:06:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:23.590 11:06:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:23.590 11:06:49 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:23.591 ************************************
00:02:23.591 START TEST acl
00:02:23.591 ************************************
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:23.591 * Looking for test storage...
00:02:23.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:23.591 11:06:49 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:23.591 11:06:49 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:23.591 11:06:49 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:02:23.591 11:06:49 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:02:23.591 11:06:49 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:02:23.591 11:06:49 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:02:23.591 11:06:49 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:02:23.591 11:06:49 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:23.591 11:06:49 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:25.496 11:06:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:02:25.496 11:06:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:02:25.496 11:06:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:25.496 11:06:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:02:25.496 11:06:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:02:25.496 11:06:51 setup.sh.acl -- setup/common.sh@10 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:26.431 Hugepages
00:02:26.431 node hugesize free / total
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.431 00
00:02:26.431 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.431 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:26.432 11:06:52 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:26.432 11:06:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:26.432 11:06:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:26.432 11:06:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:26.432 ************************************
00:02:26.432 START TEST denied
00:02:26.432 ************************************
00:02:26.432 11:06:52 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:02:26.432 11:06:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:02:26.432 11:06:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:26.432 11:06:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:02:26.432 11:06:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:26.432 11:06:52 setup.sh.acl.denied -- setup/common.sh@10 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:28.338 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:28.338 11:06:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:30.875
00:02:30.875 real 0m4.026s
00:02:30.875 user 0m1.120s
00:02:30.875 sys 0m1.926s
00:02:30.875 11:06:56 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:30.875 11:06:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:30.875 ************************************
00:02:30.875 END TEST denied
00:02:30.875 ************************************
00:02:30.875 11:06:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:02:30.875 11:06:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:30.875 11:06:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:30.875 11:06:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:30.875 11:06:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:30.875 ************************************
00:02:30.875 START TEST allowed
00:02:30.875 ************************************
00:02:30.875 11:06:56 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:02:30.875 11:06:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:02:30.875 11:06:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:02:30.875 11:06:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:02:30.875 11:06:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:02:30.875 11:06:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:33.405 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:33.405 11:06:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:02:33.405 11:06:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:02:33.405 11:06:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:02:33.405 11:06:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:33.405 11:06:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:34.780
00:02:34.780 real 0m4.012s
00:02:34.780 user 0m1.036s
00:02:34.780 sys 0m1.804s
00:02:34.780 11:07:00 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:34.780 11:07:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:02:34.780 ************************************
00:02:34.780 END TEST allowed
00:02:34.780 ************************************
00:02:34.780 11:07:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:02:34.780
00:02:34.780 real 0m10.971s
00:02:34.780 user 0m3.320s
00:02:34.780 sys 0m5.565s
00:02:34.780 11:07:00 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:34.780 11:07:00 setup.sh.acl -- common/autotest_common.sh@10 --
# set +x 00:02:34.780 ************************************ 00:02:34.780 END TEST acl 00:02:34.780 ************************************ 00:02:34.780 11:07:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:34.780 11:07:00 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:34.780 11:07:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.780 11:07:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.780 11:07:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:34.780 ************************************ 00:02:34.780 START TEST hugepages 00:02:34.780 ************************************ 00:02:34.780 11:07:00 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:34.780 * Looking for test storage... 00:02:34.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:34.780 11:07:00 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44921624 kB' 'MemAvailable: 48375892 kB' 'Buffers: 2704 kB' 'Cached: 9125552 kB' 'SwapCached: 0 kB' 'Active: 6166512 kB' 'Inactive: 3481172 kB' 'Active(anon): 5781296 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522692 kB' 'Mapped: 203320 kB' 'Shmem: 5261868 kB' 'KReclaimable: 163008 kB' 'Slab: 482948 kB' 'SReclaimable: 163008 kB' 'SUnreclaim: 319940 kB' 'KernelStack: 12768 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 6951248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.780 11:07:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... same compare/continue xtrace repeated for every remaining /proc/meminfo field through HugePages_Rsvd ...] 00:02:34.782 11:07:00
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # 
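The xtrace above ends with setup/common.sh matching the Hugepagesize field and echoing 2048. A minimal standalone sketch of that get_meminfo lookup follows; the sample lines mirror the snapshot printed earlier in this log (real code reads /proc/meminfo, and the values here are from this run, not guaranteed elsewhere):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: split meminfo-style
# "Field: value ..." lines on ': ' and print the value of one field.
# The sample stands in for /proc/meminfo; values copied from this log.
meminfo_sample='MemTotal: 60541708 kB
MemFree: 44921624 kB
HugePages_Total: 2048
Hugepagesize: 2048 kB'
get=Hugepagesize
while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # every non-matching field is skipped
    echo "$val"                        # prints: 2048
    break
done <<< "$meminfo_sample"
```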
no_nodes=2 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:34.782 11:07:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:34.782 11:07:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.782 11:07:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.782 11:07:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:34.782 ************************************ 00:02:34.782 START TEST default_setup 
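The clear_hp trace above loops over both NUMA nodes and echoes 0 into each per-size nr_hugepages counter. A sketch of the same walk, run against a mktemp scratch tree standing in for /sys/devices/system/node (which only root can write):

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp walk traced above: zero nr_hugepages for every
# hugepage size on every NUMA node. The scratch directory is a stand-in
# for /sys/devices/system/node; node/size names are illustrative.
sysroot=$(mktemp -d)
mkdir -p "$sysroot"/node{0,1}/hugepages/hugepages-{2048kB,1048576kB}
for hp in "$sysroot"/node*/hugepages/hugepages-*; do
    echo 1024 > "$hp/nr_hugepages"                  # pretend pages were reserved
done
for node in "$sysroot"/node*; do                    # two nodes, as in this log
    for hp in "$node"/hugepages/hugepages-*; do     # each hugepage size
        echo 0 > "$hp/nr_hugepages"                 # the repeated 'echo 0' above
    done
done
cat "$sysroot"/node1/hugepages/hugepages-2048kB/nr_hugepages   # prints: 0
rm -rf "$sysroot"
```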
00:02:34.782 ************************************ 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup 
-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.782 11:07:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:36.176 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:36.176 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:36.176 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:37.116 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local 
surp 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47007508 kB' 'MemAvailable: 50461688 kB' 'Buffers: 2704 kB' 'Cached: 9125640 kB' 'SwapCached: 0 kB' 'Active: 6183804 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798588 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539960 kB' 'Mapped: 203428 kB' 'Shmem: 5261956 kB' 'KReclaimable: 162832 kB' 'Slab: 482508 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319676 kB' 'KernelStack: 12592 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6971876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... same compare/continue xtrace repeated for the remaining /proc/meminfo fields ...]
-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 
11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.116 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47007580 kB' 'MemAvailable: 50461760 kB' 'Buffers: 2704 kB' 'Cached: 9125644 kB' 'SwapCached: 0 kB' 'Active: 6184160 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798944 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540296 kB' 'Mapped: 203416 kB' 'Shmem: 5261960 kB' 'KReclaimable: 162832 kB' 'Slab: 482500 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319668 kB' 'KernelStack: 12640 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6971896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 
31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.117 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 
11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:37.118 
11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47006824 kB' 'MemAvailable: 50461004 kB' 'Buffers: 2704 kB' 'Cached: 9125656 kB' 'SwapCached: 0 kB' 'Active: 6183976 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798760 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540112 kB' 'Mapped: 203416 kB' 'Shmem: 5261972 kB' 'KReclaimable: 162832 kB' 'Slab: 482500 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319668 kB' 'KernelStack: 12640 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6971916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 
11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.118 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:37.119 nr_hugepages=1024 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.119 
resv_hugepages=0 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.119 surplus_hugepages=0 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.119 anon_hugepages=0 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.119 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.120 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47007160 kB' 'MemAvailable: 50461340 kB' 'Buffers: 2704 kB' 'Cached: 9125684 kB' 
'SwapCached: 0 kB' 'Active: 6184136 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798920 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540180 kB' 'Mapped: 203340 kB' 'Shmem: 5262000 kB' 'KReclaimable: 162832 kB' 'Slab: 482500 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319668 kB' 'KernelStack: 12656 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6971940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:37.379 11:07:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.379 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.380 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.381 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27863400 kB' 'MemUsed: 4966484 kB' 'SwapCached: 0 kB' 'Active: 1662144 kB' 'Inactive: 99740 kB' 'Active(anon): 1556144 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499400 kB' 'Mapped: 82988 kB' 'AnonPages: 265672 kB' 'Shmem: 1293660 kB' 'KernelStack: 7016 kB' 'PageTables: 4760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70048 kB' 'Slab: 258752 kB' 'SReclaimable: 70048 kB' 'SUnreclaim: 188704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.382 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:37.383 node0=1024 expecting 1024
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:37.383
00:02:37.383 real 0m2.481s
00:02:37.383 user 0m0.647s
00:02:37.383 sys 0m0.947s
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:37.383 11:07:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:37.383 ************************************
00:02:37.383 END TEST default_setup
00:02:37.383 ************************************
00:02:37.383 11:07:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:02:37.383 11:07:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:37.383 11:07:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:37.383 11:07:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:37.383 11:07:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:37.383 ************************************
00:02:37.383 START TEST per_node_1G_alloc
00:02:37.383 ************************************
00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc --
common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.383 11:07:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:38.769 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.769 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.769 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.769 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.769 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.769 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.769 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.769 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.769 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.769 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.769 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.769 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.769 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 
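The `hugepages.sh@70-71` trace above shows the harness distributing the per-node count (`nr_hugepages=512`) across every node listed in `HUGENODE=0,1`: it loops over the requested node IDs and assigns the same count to each slot of an associative-by-index array. A minimal sketch of that bookkeeping, with simplified names loosely mirroring the script (this is an illustration under those assumptions, not the SPDK implementation):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage bookkeeping seen in the trace
# (hugepages.sh@62-73): each user-requested node gets the full
# per-node count; nodes_test maps node id -> hugepage count.
get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$1; shift
    local user_nodes=("$@")          # e.g. (0 1), parsed from HUGENODE=0,1
    nodes_test=()
    for _no_nodes in "${user_nodes[@]}"; do
        nodes_test[_no_nodes]=$_nr_hugepages
    done
}

declare -a nodes_test
get_test_nr_hugepages_per_node 512 0 1
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]}"   # node0=512, node1=512
done
```

The later `verify_nr_hugepages` pass then reads each node's actual `HugePages_Total` back out of meminfo and compares it against this expectation table, which is where the `node0=1024 expecting 1024` line in the earlier test came from.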
00:02:38.769 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.769 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.769 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.769 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46994836 kB' 'MemAvailable: 50449016 kB' 'Buffers: 2704 kB' 'Cached: 9125760 kB' 'SwapCached: 0 kB' 'Active: 6184488 kB' 'Inactive: 3481172 kB' 'Active(anon): 5799272 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540936 kB' 'Mapped: 203636 kB' 'Shmem: 5262076 kB' 'KReclaimable: 162832 kB' 'Slab: 482336 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319504 kB' 'KernelStack: 12656 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6972288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.769 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.769 [... identical skip repeated for each non-matching /proc/meminfo field through HardwareCorrupted ...] 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46995520 kB' 'MemAvailable: 50449700 kB' 'Buffers: 2704 kB' 'Cached: 9125768 kB' 'SwapCached: 0 kB' 'Active: 6184324 kB' 'Inactive: 3481172 kB' 'Active(anon): 5799108 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540312 kB' 'Mapped: 203436 kB' 'Shmem: 5262084 kB' 'KReclaimable: 162832 kB' 'Slab: 482384 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319552 kB' 'KernelStack: 12656 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6972308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.770 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.770 [... identical skip repeated for each non-matching /proc/meminfo field ...] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 
11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.771 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541708 kB' 'MemFree: 46996132 kB' 'MemAvailable: 50450312 kB' 'Buffers: 2704 kB' 'Cached: 9125768 kB' 'SwapCached: 0 kB' 'Active: 6184316 kB' 'Inactive: 3481172 kB' 'Active(anon): 5799100 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540264 kB' 'Mapped: 203356 kB' 'Shmem: 5262084 kB' 'KReclaimable: 162832 kB' 'Slab: 482360 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319528 kB' 'KernelStack: 12640 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6972328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.772 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.773 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.774 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 
11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:38.775 nr_hugepages=1024
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:38.775 resv_hugepages=0
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:38.775 surplus_hugepages=0
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:38.775 anon_hugepages=0
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46996132 kB' 'MemAvailable: 50450312 kB' 'Buffers: 2704 kB' 'Cached: 9125768 kB' 'SwapCached: 0 kB' 'Active: 6184420 kB' 'Inactive: 3481172 kB' 'Active(anon): 5799204 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540368 kB' 'Mapped: 203356 kB' 'Shmem: 5262084 kB' 'KReclaimable: 162832 kB' 'Slab: 482360 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319528 kB' 'KernelStack: 12624 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6972352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB'
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:38.775 11:07:04
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.775 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 
11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.776 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.777 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28913456 kB' 'MemUsed: 3916428 kB' 'SwapCached: 0 kB' 'Active: 1661972 kB' 'Inactive: 99740 kB' 'Active(anon): 1555972 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499536 kB' 'Mapped: 82992 kB' 'AnonPages: 265352 kB' 'Shmem: 1293796 kB' 'KernelStack: 7016 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70048 kB' 'Slab: 258644 kB' 'SReclaimable: 70048 kB' 'SUnreclaim: 188596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.777 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:38.778 11:07:04 
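The trace records above show `setup/common.sh`'s `get_meminfo` helper walking every key in `/sys/devices/system/node/node0/meminfo` until it hits `HugePages_Surp`, then echoing the value and returning. A minimal sketch of that lookup, reconstructed only from the names visible in the trace (`get_meminfo`, `mem_f`, `IFS=': '`, the `var`/`val` split, the `Node N ` prefix strip) — the actual SPDK implementation may differ:

```shell
# Hedged reconstruction of the traced helper: print the value of a meminfo
# key, optionally from a specific NUMA node's meminfo file.
get_meminfo() {
	local get=$1 node=$2
	local mem_f=/proc/meminfo var val _
	# Per-node lookups read the node-local file instead, as in the trace.
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo
	# Node-local lines carry a "Node N " prefix; strip it, then split
	# each line on ': ' into key and value, discarding the "kB" unit.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(sed 's/^Node [0-9]* *//' "$mem_f")
	return 1
}
```

With this sketch, `get_meminfo HugePages_Surp 0` reproduces the `echo 0` / `return 0` seen at `setup/common.sh@33` above, and the surrounding `hugepages.sh` loop accumulates that result into `nodes_test[node]` for each NUMA node in turn.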
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 18082676 kB' 'MemUsed: 9629148 kB' 'SwapCached: 0 kB' 'Active: 4522488 kB' 'Inactive: 3381432 kB' 'Active(anon): 4243272 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3381432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7629000 kB' 'Mapped: 120364 kB' 'AnonPages: 275028 kB' 'Shmem: 3968352 kB' 'KernelStack: 5656 kB' 'PageTables: 3272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92784 kB' 'Slab: 223716 kB' 'SReclaimable: 92784 kB' 'SUnreclaim: 130932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.778 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.779 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
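The long run of `[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] … continue` lines above is xtrace output of a meminfo lookup: each line of the snapshot is split with `IFS=': '` into a key and a value, non-matching keys fall through to `continue`, and the matching key's value is echoed (with `echo 0` as the fallback seen at the end of the scan). A minimal sketch of that pattern, assuming a hypothetical function name and an optional file argument for testability (the real logic lives in setup/common.sh's `get_meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: scan "key: value" lines,
# skip non-matching keys, and echo the value of the requested one.
# Name and file argument are illustrative, not the actual SPDK code.
get_meminfo_sketch() {
    local get=$1 src=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. HugePages_Surp
        echo "$val"
        return 0
    done < "$src"
    echo 0   # key absent: report 0, matching the trace's "echo 0" fallback
}
```

The `read -r var val _` trailing `_` swallows the `kB` unit field so callers get a bare number.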
"${!nodes_test[@]}" 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.780 node0=512 expecting 512 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:38.780 node1=512 expecting 512 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:38.780 00:02:38.780 real 0m1.544s 00:02:38.780 user 0m0.634s 00:02:38.780 sys 0m0.876s 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:38.780 11:07:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:38.780 ************************************ 00:02:38.780 END TEST per_node_1G_alloc 00:02:38.780 ************************************ 00:02:39.037 11:07:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:39.037 11:07:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:39.037 11:07:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:39.037 11:07:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:39.037 11:07:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:39.037 
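The `sorted_t[nodes_test[node]]=1` lines in the trace use a bash idiom worth noting: each node's page count is stored as an array *index*, so the array acts as a set of distinct counts. If every node received the same allocation, the set collapses to a single key, which is what the `node0=512 expecting 512` / `node1=512 expecting 512` echoes then confirm. A sketch of the idiom, with illustrative values rather than live meminfo data:

```shell
#!/usr/bin/env bash
# Sketch: collapse per-node hugepage counts into set keys; an even
# allocation leaves exactly one distinct count. Values are illustrative.
declare -a nodes_test=([0]=512 [1]=512)
declare -A sorted_t=()
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1          # count used as a set key
    echo "node${node}=${nodes_test[node]} expecting 512"
done
# Even split iff only one distinct count was observed across nodes
(( ${#sorted_t[@]} == 1 )) && echo "even allocation"
```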
************************************ 00:02:39.037 START TEST even_2G_alloc 00:02:39.037 ************************************ 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.037 11:07:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:39.969 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.969 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:39.969 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.969 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.969 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.969 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.969 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:39.969 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:39.969 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.969 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:40.231 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:40.231 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:02:40.231 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:40.231 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:40.231 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:40.231 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:40.231 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.231 11:07:06 
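The even_2G_alloc setup traced above derives its page count from the requested size: `get_test_nr_hugepages 2097152` with a 2048 kB hugepage size yields `nr_hugepages=1024`, split evenly across the two NUMA nodes as 512 each (the `nodes_test[_no_nodes - 1]=512` assignments). A sketch of that arithmetic, assuming the kB units implied by the `Hugepagesize: 2048 kB` / `Hugetlb: 2097152 kB` figures in the snapshot:

```shell
#!/usr/bin/env bash
# Sketch: derive the hugepage count for a 2 GB request and split it
# evenly across NUMA nodes, matching the trace's nr_hugepages=1024
# and 512-per-node values. Variable names are illustrative.
size_kb=2097152          # requested size in kB (2 GB)
hugepagesize_kb=2048     # default 2 MB hugepage, in kB
no_nodes=2               # NUMA nodes on the test machine
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024
per_node=$(( nr_hugepages / no_nodes ))         # 512
echo "nr_hugepages=$nr_hugepages per_node=$per_node"
```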
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47007800 kB' 'MemAvailable: 50461980 kB' 'Buffers: 2704 kB' 'Cached: 9125896 kB' 'SwapCached: 0 kB' 'Active: 6183476 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798260 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539196 kB' 'Mapped: 203072 kB' 'Shmem: 5262212 kB' 'KReclaimable: 162832 kB' 'Slab: 481920 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319088 kB' 'KernelStack: 12672 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6958632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.231 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.231 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 
11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.232 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.233 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47008456 kB' 'MemAvailable: 50462636 kB' 'Buffers: 2704 kB' 'Cached: 9125900 kB' 'SwapCached: 0 kB' 'Active: 6181752 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796536 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537468 kB' 'Mapped: 202584 kB' 'Shmem: 5262216 kB' 'KReclaimable: 162832 kB' 'Slab: 481888 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319056 kB' 'KernelStack: 12608 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6958652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB'
00:02:40.233 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [per-key scan: MemTotal through HugePages_Rsvd each compared against HugePages_Surp and skipped via continue]
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.235 11:07:06
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47008204 kB' 'MemAvailable: 50462384 kB' 'Buffers: 2704 kB' 'Cached: 9125916 kB' 'SwapCached: 0 kB' 'Active: 6181008 kB' 'Inactive: 3481172 kB' 'Active(anon): 5795792 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536696 kB' 'Mapped: 202508 kB' 'Shmem: 5262232 kB' 'KReclaimable: 162832 kB' 'Slab: 481880 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319048 kB' 'KernelStack: 12608 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6958672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB'
00:02:40.235 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [per-key scan: MemTotal through SecPageTables each compared against HugePages_Rsvd and skipped via continue; scan continues]
00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 
11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.236 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.236 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:40.237 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.237 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.237 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:40.237 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:40.237 nr_hugepages=1024 00:02:40.237 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:40.237 resv_hugepages=0 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:40.496 surplus_hugepages=0 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:40.496 anon_hugepages=0 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.496 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.496 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47008500 kB' 'MemAvailable: 50462680 kB' 'Buffers: 2704 kB' 'Cached: 9125940 kB' 'SwapCached: 0 kB' 'Active: 6181356 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796140 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537020 kB' 'Mapped: 202508 kB' 'Shmem: 5262256 kB' 'KReclaimable: 162832 kB' 'Slab: 481880 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319048 kB' 'KernelStack: 12608 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6958696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.497 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
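The long run of `read -r var val _` / `continue` entries above is the trace of a key-lookup loop over meminfo-style lines: every key that is not the requested one triggers `continue`, and the matching key's value is echoed (here, `HugePages_Total` yields 1024). A minimal self-contained sketch of that pattern — `get_meminfo_value` is a hypothetical stand-in for the `get_meminfo` helper in setup/common.sh, and the sample values are taken from the trace:

```shell
# Approximation of the lookup loop the trace exercises: split each
# "Key: value kB" line on ':' and spaces, skip non-matching keys.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching key -> continue (as in the trace)
        echo "$val"
        return 0
    done
    return 1
}

# Sample lines standing in for /proc/meminfo (values from the trace above)
printf '%s\n' 'MemTotal: 32829884 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
    | get_meminfo_value HugePages_Total
```

Running this prints the matched value, mirroring the `echo 1024` / `return 0` pair visible at the end of the scan in the trace.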
setup/common.sh@18 -- # local node=0 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28918340 kB' 'MemUsed: 3911544 kB' 'SwapCached: 0 kB' 'Active: 1660036 kB' 'Inactive: 99740 kB' 'Active(anon): 1554036 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499588 kB' 'Mapped: 82272 kB' 'AnonPages: 263268 kB' 'Shmem: 1293848 kB' 'KernelStack: 6936 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70048 kB' 'Slab: 258504 kB' 'SReclaimable: 70048 kB' 'SUnreclaim: 188456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.498 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- 
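Before each per-node scan, the trace shows the source-file selection and prefix stripping from setup/common.sh: prefer `/sys/devices/system/node/nodeN/meminfo` when it exists, fall back to `/proc/meminfo`, then strip the leading `Node N ` prefix that the per-node file carries on every line. A hedged sketch of that step, using sample lines in place of the real sysfs file (values are illustrative):

```shell
# Requires bash; the +([0-9]) pattern in parameter expansion needs extglob.
shopt -s extglob

node=0
mem_f=/proc/meminfo
# Prefer the per-node meminfo when the node directory exists (as in the trace)
[[ -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo

# Demonstrate the "Node N " prefix strip on sample per-node lines
mapfile -t mem < <(printf '%s\n' 'Node 0 MemTotal: 32829884 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

After the strip, the per-node lines have the same `Key: value` shape as `/proc/meminfo`, which is why the same lookup loop can scan either source.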
setup/common.sh@20 -- # local mem_f mem 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:40.499 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 18088144 kB' 'MemUsed: 9623680 kB' 'SwapCached: 0 kB' 'Active: 4521392 kB' 'Inactive: 3381432 kB' 'Active(anon): 4242176 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3381432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7629096 kB' 'Mapped: 120236 kB' 'AnonPages: 273836 kB' 'Shmem: 3968448 kB' 'KernelStack: 5656 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92784 kB' 'Slab: 223376 kB' 'SReclaimable: 92784 kB' 'SUnreclaim: 130592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.500 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:40.501 node0=512 expecting 512 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.501 11:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:40.501 node1=512 expecting 512 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:40.501 00:02:40.501 real 0m1.501s 00:02:40.501 user 0m0.603s 00:02:40.501 sys 0m0.860s 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:40.501 11:07:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:40.501 ************************************ 00:02:40.501 END TEST even_2G_alloc 00:02:40.501 ************************************ 00:02:40.501 11:07:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:40.501 11:07:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:40.501 11:07:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:40.501 11:07:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.501 11:07:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:40.501 ************************************ 00:02:40.501 START TEST odd_alloc 00:02:40.501 ************************************ 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:40.501 11:07:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.501 11:07:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:41.897 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.897 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.897 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.897 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.897 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.897 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.897 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.897 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.897 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.897 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.897 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.897 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.897 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.897 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.897 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.897 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.897 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.897 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47005820 kB' 'MemAvailable: 50460000 kB' 'Buffers: 2704 kB' 'Cached: 9126032 kB' 'SwapCached: 0 kB' 'Active: 6181924 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796708 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537596 kB' 
'Mapped: 202636 kB' 'Shmem: 5262348 kB' 'KReclaimable: 162832 kB' 'Slab: 482132 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319300 kB' 'KernelStack: 12608 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6958880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.898 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.899 
11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47006340 kB' 'MemAvailable: 50460520 kB' 'Buffers: 2704 kB' 'Cached: 9126036 kB' 'SwapCached: 0 kB' 'Active: 6181588 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796372 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537256 kB' 'Mapped: 202592 kB' 'Shmem: 5262352 kB' 'KReclaimable: 162832 kB' 'Slab: 482124 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319292 kB' 'KernelStack: 12576 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6958900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.899 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.900 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.900 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47006560 kB' 'MemAvailable: 50460740 kB' 'Buffers: 2704 kB' 'Cached: 9126036 kB' 'SwapCached: 0 kB' 'Active: 6182296 kB' 'Inactive: 3481172 kB' 'Active(anon): 5797080 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537928 kB' 'Mapped: 202948 kB' 'Shmem: 5262352 kB' 'KReclaimable: 162832 kB' 'Slab: 482084 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319252 kB' 'KernelStack: 12528 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6961068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 
11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 
11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.901 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:41.902 nr_hugepages=1025 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:41.902 resv_hugepages=0 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:41.902 surplus_hugepages=0 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.902 anon_hugepages=0 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == 
nr_hugepages )) 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.902 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 47002780 kB' 'MemAvailable: 50452928 kB' 'Buffers: 2704 kB' 'Cached: 9126072 kB' 'SwapCached: 0 kB' 'Active: 6186552 kB' 'Inactive: 3481172 kB' 'Active(anon): 5801336 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542164 kB' 'Mapped: 202948 kB' 'Shmem: 5262388 kB' 'KReclaimable: 162832 kB' 'Slab: 482084 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319252 kB' 'KernelStack: 12576 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 6965060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 
11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 
11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.903 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]]
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28924020 kB' 'MemUsed: 3905864 kB' 'SwapCached: 0 kB' 'Active: 1660424 kB' 'Inactive: 99740 kB' 'Active(anon): 1554424 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499592 kB' 'Mapped: 83032 kB' 'AnonPages: 263644 kB' 'Shmem: 1293852 kB' 'KernelStack: 6920 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70048 kB' 'Slab: 258492 kB' 'SReclaimable: 70048 kB' 'SUnreclaim: 188444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.904 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.905
11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:41.905 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 18074728 kB' 'MemUsed: 9637096 kB' 'SwapCached: 0 kB' 'Active: 4522140 kB' 'Inactive: 3381432 kB' 'Active(anon): 4242924 kB' 'Inactive(anon): 0 kB'
'Active(file): 279216 kB' 'Inactive(file): 3381432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7629228 kB' 'Mapped: 120400 kB' 'AnonPages: 274496 kB' 'Shmem: 3968580 kB' 'KernelStack: 5640 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92784 kB' 'Slab: 223592 kB' 'SReclaimable: 92784 kB' 'SUnreclaim: 130808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.906 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:02:41.907 node0=512 expecting 513
00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.907
11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:41.907 node1=513 expecting 512 00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:41.907 00:02:41.907 real 0m1.514s 00:02:41.907 user 0m0.616s 00:02:41.907 sys 0m0.863s 00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:41.907 11:07:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:41.907 ************************************ 00:02:41.907 END TEST odd_alloc 00:02:41.907 ************************************ 00:02:41.907 11:07:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:41.907 11:07:08 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:41.907 11:07:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:41.907 11:07:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:41.907 11:07:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.166 ************************************ 00:02:42.166 START TEST custom_alloc 00:02:42.166 ************************************ 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local 
nodes_hp 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:42.166 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:42.167 11:07:08 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.167 11:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:43.104 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:43.104 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:43.104 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.104 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.104 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.104 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.104 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.104 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.104 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.104 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 
00:02:43.104 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:43.104 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:43.104 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:43.104 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:43.367 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:43.367 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:43.367 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.367 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45949000 kB' 'MemAvailable: 49403180 kB' 'Buffers: 2704 kB' 'Cached: 9126160 kB' 'SwapCached: 0 kB' 'Active: 6181564 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796348 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537088 kB' 'Mapped: 202532 kB' 'Shmem: 5262476 kB' 'KReclaimable: 162832 kB' 'Slab: 481900 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319068 kB' 'KernelStack: 12608 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6959008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 
'DirectMap1G: 56623104 kB' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.368 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 
11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.369 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.369 
11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45949376 kB' 'MemAvailable: 49403556 kB' 'Buffers: 2704 kB' 'Cached: 9126164 kB' 'SwapCached: 0 kB' 'Active: 6181668 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796452 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537216 kB' 'Mapped: 202524 kB' 'Shmem: 5262480 kB' 'KReclaimable: 162832 kB' 'Slab: 481880 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319048 kB' 'KernelStack: 12624 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6959028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:43.369 [... setup/common.sh@31-32: per-key scan of the meminfo snapshot above -- every key from MemTotal through HugePages_Rsvd is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and hits `continue`; identical repeated trace lines elided ...] 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf
'%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45949376 kB' 'MemAvailable: 49403556 kB' 'Buffers: 2704 kB' 'Cached: 9126176 kB' 'SwapCached: 0 kB' 'Active: 6181508 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796292 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537004 kB' 'Mapped: 202524 kB' 'Shmem: 5262492 kB' 'KReclaimable: 162832 kB' 'Slab: 481880 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319048 kB' 'KernelStack: 12608 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6959048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.371 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:43.371 [... setup/common.sh@31-32: identical per-key scan of the second meminfo snapshot against \H\u\g\e\P\a\g\e\s\_\R\s\v\d -- each key tested so far hits `continue`; repeated trace lines elided; this log chunk is truncated mid-scan after the VmallocTotal check at 00:02:43.372 ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.372 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.373 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:43.373 nr_hugepages=1536 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.373 resv_hugepages=0 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.373 surplus_hugepages=0 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.373 anon_hugepages=0 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.373 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.635 11:07:09 
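The trace above is `setup/common.sh`'s `get_meminfo` loop skipping every `/proc/meminfo` key until it reaches the requested one (here `HugePages_Rsvd`, resolved to 0). A simplified sketch of that pattern, reading from stdin instead of `/proc/meminfo` so it is self-contained (this is a reconstruction for illustration, not the exact SPDK implementation):

```shell
#!/usr/bin/env bash

# Sketch of the key-lookup loop traced in the log: split each meminfo line
# on ': ', skip non-matching keys with `continue`, and echo the value (minus
# the trailing "kB" unit) once the requested key matches.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the [[ ... ]] / continue pairs above
        echo "${val%% *}"                  # strip a trailing unit such as "kB"
        return 0
    done
    return 1
}

# A trimmed snapshot matching the values reported in this log.
meminfo='MemTotal: 60541708 kB
HugePages_Total: 1536
HugePages_Free: 1536
HugePages_Rsvd: 0'

get_meminfo_sketch HugePages_Rsvd <<< "$meminfo"   # prints 0
```

The hundreds of `continue` lines in the trace are this one loop iterating past every meminfo key, with `set -x`-style tracing printing each comparison.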
setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45949376 kB' 'MemAvailable: 49403556 kB' 'Buffers: 2704 kB' 'Cached: 9126204 kB' 'SwapCached: 0 kB' 'Active: 6181596 kB' 'Inactive: 3481172 kB' 'Active(anon): 5796380 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537000 kB' 'Mapped: 202524 kB' 'Shmem: 5262520 kB' 'KReclaimable: 162832 kB' 'Slab: 481880 kB' 'SReclaimable: 162832 kB' 'SUnreclaim: 319048 kB' 'KernelStack: 12608 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 6959068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.635 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.635 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.636 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.637 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28936268 kB' 'MemUsed: 3893616 kB' 'SwapCached: 0 kB' 'Active: 1660492 kB' 'Inactive: 99740 kB' 'Active(anon): 1554492 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499624 kB' 'Mapped: 82264 kB' 'AnonPages: 263680 kB' 'Shmem: 1293884 kB' 'KernelStack: 6936 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70048 kB' 'Slab: 258416 kB' 'SReclaimable: 70048 kB' 'SUnreclaim: 188368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 
11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.637 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17014244 kB' 'MemUsed: 10697580 kB' 'SwapCached: 0 kB' 'Active: 4521180 kB' 'Inactive: 3381432 kB' 'Active(anon): 4241964 kB' 'Inactive(anon): 0 kB' 'Active(file): 279216 kB' 'Inactive(file): 3381432 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7629324 kB' 'Mapped: 120260 kB' 'AnonPages: 273356 kB' 'Shmem: 3968676 kB' 'KernelStack: 5672 kB' 'PageTables: 3240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92784 kB' 'Slab: 223464 kB' 'SReclaimable: 92784 kB' 'SUnreclaim: 130680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.638 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.639 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
0 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:43.640 node0=512 expecting 512 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:43.640 node1=1024 expecting 1024 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:43.640 00:02:43.640 real 0m1.527s 00:02:43.640 user 0m0.638s 00:02:43.640 sys 0m0.850s 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:43.640 11:07:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:43.640 ************************************ 00:02:43.640 END TEST custom_alloc 00:02:43.640 ************************************ 00:02:43.640 11:07:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:43.640 11:07:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:43.640 11:07:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:43.640 11:07:09 
setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.640 11:07:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:43.640 ************************************ 00:02:43.640 START TEST no_shrink_alloc 00:02:43.640 ************************************ 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.640 11:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:45.021 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:45.021 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:45.021 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:45.021 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:45.021 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:45.021 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:45.021 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:45.021 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:45.021 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:45.021 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:45.021 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:45.021 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:45.021 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:45.021 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:45.021 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:45.021 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:45.021 0000:80:04.0 
(8086 0e20): Already using the vfio-pci driver 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.021 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46983344 kB' 'MemAvailable: 50437548 kB' 'Buffers: 2704 kB' 'Cached: 9126296 kB' 'SwapCached: 0 kB' 'Active: 6183568 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798352 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538964 kB' 'Mapped: 202544 kB' 'Shmem: 5262612 kB' 'KReclaimable: 162880 kB' 'Slab: 481896 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 319016 kB' 'KernelStack: 12624 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6960548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.021 
11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.021 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 
11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.022 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46984956 kB' 'MemAvailable: 50439160 kB' 'Buffers: 2704 kB' 'Cached: 9126296 kB' 'SwapCached: 0 kB' 'Active: 6183740 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798524 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539528 kB' 'Mapped: 202540 kB' 'Shmem: 5262612 kB' 'KReclaimable: 162880 kB' 'Slab: 481868 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 318988 kB' 'KernelStack: 13024 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6961704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 
11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.023 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 
11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.024 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46985592 kB' 'MemAvailable: 50439796 kB' 'Buffers: 2704 kB' 'Cached: 9126316 kB' 'SwapCached: 0 kB' 'Active: 6183632 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798416 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538932 kB' 'Mapped: 202548 kB' 'Shmem: 5262632 kB' 'KReclaimable: 162880 kB' 'Slab: 481936 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 319056 kB' 'KernelStack: 12880 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6961728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 
'DirectMap1G: 56623104 kB' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.025 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.025 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.026 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.027 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.027 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.027 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 
11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.028 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:45.029 nr_hugepages=1024 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:45.029 resv_hugepages=0 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:02:45.029 surplus_hugepages=0 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:45.029 anon_hugepages=0 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46982992 kB' 'MemAvailable: 50437196 kB' 'Buffers: 2704 kB' 'Cached: 9126336 kB' 'SwapCached: 0 kB' 'Active: 6183956 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798740 kB' 'Inactive(anon): 
0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539284 kB' 'Mapped: 202548 kB' 'Shmem: 5262652 kB' 'KReclaimable: 162880 kB' 'Slab: 481940 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 319060 kB' 'KernelStack: 12912 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6961748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.029 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 
11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.030 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 
11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
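The long run of `[[ key == \H\u\g\e\P\a\g\e\s\_... ]] / continue` lines above, ending in `echo 1024` and `return 0`, is one complete `get_meminfo` lookup: the script scans every meminfo key, skips non-matches, and prints the value of the requested one. The sketch below reconstructs that loop from the trace alone; it is not SPDK's exact `setup/common.sh` (the real helper reads the file with `mapfile`, strips `Node N ` prefixes for per-node files, and takes a `node=` argument, all omitted here), and the embedded sample values are the ones this log itself reports.

```shell
# Reconstruction (from the trace, not the SPDK source) of the get_meminfo
# key-scan. A captured sample stands in for /proc/meminfo so the sketch is
# self-contained; values match the HugePages lines reported in this log.
sample='MemTotal:       60541708 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0'

get_meminfo() {
    get=$1
    # IFS=': ' splits "Key:   value kB" into var=Key, val=value, _=unit,
    # exactly as the repeated `read -r var val _` lines in the trace show.
    while IFS=': ' read -r var val _; do
        [ "$var" = "$get" ] || continue   # one "continue" per non-matching key
        echo "$val"
        return 0
    done <<EOF
$sample
EOF
}

get_meminfo HugePages_Rsvd    # prints: 0    (the trace's `echo 0`)
get_meminfo HugePages_Total   # prints: 1024 (the trace's `echo 1024`)
```

This explains why the log repeats the same three lines per key: each meminfo entry that is not the requested one costs one `[[ ... ]]` test, one `continue`, and one `read` before the loop reaches the match.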
00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27876376 kB' 'MemUsed: 4953508 kB' 'SwapCached: 0 kB' 'Active: 1661420 kB' 'Inactive: 99740 kB' 'Active(anon): 1555420 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499632 kB' 'Mapped: 82264 kB' 'AnonPages: 264644 kB' 'Shmem: 1293892 kB' 'KernelStack: 7400 kB' 'PageTables: 5580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70096 kB' 'Slab: 258432 kB' 'SReclaimable: 70096 kB' 'SUnreclaim: 188336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.031 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.032 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:45.033 node0=1024 expecting 1024 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.033 11:07:11 setup.sh.hugepages.no_shrink_alloc -- 
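The long trace above is setup/common.sh's `get_meminfo` walking a meminfo-style file one line at a time with `IFS=': ' read -r var val _`, executing `continue` for every key that is not the one requested and `echo`-ing the value (here `1024` for `HugePages_Total`, `0` for `HugePages_Surp`) when it matches. A minimal self-contained sketch of that parsing pattern — an illustration of what the trace shows, not the literal SPDK helper, and `get_meminfo`'s second argument here is a file path rather than a node number:

```shell
#!/usr/bin/env bash
# Sketch: extract one key's value from a meminfo-style file, mirroring the
# IFS=': ' read loop visible in the xtrace output above.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # ': ' in IFS splits on both the colon and the spaces, so for a line
    # like "HugePages_Total:     1024" we get var=HugePages_Total, val=1024.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1   # key not found
}
```

Per-node values come from the same loop pointed at `/sys/devices/system/node/node0/meminfo` instead of `/proc/meminfo`, which is why the trace swaps `mem_f` when a node argument is given.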
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.416 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.416 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.416 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.416 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.416 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.416 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.416 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.416 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.416 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.416 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.416 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.416 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.416 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.416 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.416 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.416 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.416 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.416 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- 
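The `INFO: Requested 512 hugepages but 1024 already allocated on node0` line shows the effect of running setup.sh with `CLEAR_HUGE=no` and `NRHUGE=512`: an existing allocation larger than the request is kept rather than shrunk, which is exactly what the subsequent `verify_nr_hugepages` pass re-checks. A sketch of that keep-or-grow decision — an assumption about the policy the log line implies, not the literal scripts/setup.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the no-shrink hugepage policy suggested by the INFO line above.
# decide_hugepages REQUESTED CURRENT prints the action setup.sh would take.
decide_hugepages() {
    local requested=$1 current=$2
    if (( current >= requested )); then
        # Never shrink an existing allocation when CLEAR_HUGE=no.
        echo "keep ${current}"
    else
        # Only grow: write the new count to nr_hugepages (elided here).
        echo "grow ${requested}"
    fi
}
```

With `decide_hugepages 512 1024` the answer is `keep 1024`, matching the log: the test then verifies `node0=1024 expecting 1024` even though only 512 were requested.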
setup/hugepages.sh@93 -- # local resv 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.416 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46975764 kB' 'MemAvailable: 50429968 kB' 'Buffers: 2704 kB' 'Cached: 9126400 kB' 'SwapCached: 0 kB' 'Active: 6183264 kB' 'Inactive: 3481172 kB' 'Active(anon): 5798048 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538568 kB' 'Mapped: 202636 kB' 'Shmem: 5262716 kB' 'KReclaimable: 162880 kB' 'Slab: 481852 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 318972 kB' 'KernelStack: 12640 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6959724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.417 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.417 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:46.418 
11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46975764 kB' 'MemAvailable: 50429968 kB' 'Buffers: 2704 kB' 'Cached: 9126404 kB' 'SwapCached: 0 kB' 'Active: 6182960 kB' 'Inactive: 3481172 kB' 'Active(anon): 5797744 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538256 kB' 'Mapped: 
202636 kB' 'Shmem: 5262720 kB' 'KReclaimable: 162880 kB' 'Slab: 481852 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 318972 kB' 'KernelStack: 12640 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6959740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.418 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 
11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 
11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.419 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46975764 kB' 'MemAvailable: 50429968 kB' 'Buffers: 2704 kB' 'Cached: 9126408 kB' 'SwapCached: 0 kB' 'Active: 6182524 kB' 'Inactive: 3481172 kB' 'Active(anon): 5797308 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537784 kB' 'Mapped: 202560 kB' 'Shmem: 5262724 kB' 'KReclaimable: 162880 kB' 'Slab: 481892 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 319012 kB' 'KernelStack: 12624 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6959764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB'
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.420 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:46.422 nr_hugepages=1024
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:46.422 resv_hugepages=0
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:46.422 surplus_hugepages=0
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:46.422 anon_hugepages=0
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 46976068 kB' 'MemAvailable: 50430272 kB' 'Buffers: 2704 kB' 'Cached: 9126444 kB' 'SwapCached: 0 kB' 'Active: 6182792 kB' 'Inactive: 3481172 kB' 'Active(anon): 5797576 kB' 'Inactive(anon): 0 kB' 'Active(file): 385216 kB' 'Inactive(file): 3481172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538028 kB' 'Mapped: 202560 kB' 'Shmem: 5262760 kB' 'KReclaimable: 162880 kB' 'Slab: 481892 kB' 'SReclaimable: 162880 kB' 'SUnreclaim: 319012 kB' 'KernelStack: 12608 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 6960808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 31296 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1334876 kB' 'DirectMap2M: 11167744 kB' 'DirectMap1G: 56623104 kB'
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.422 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.423 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27872480 kB' 'MemUsed: 4957404 kB' 'SwapCached: 0 kB' 'Active: 1663836 kB' 'Inactive: 99740 kB' 'Active(anon): 1557836 kB' 'Inactive(anon): 0 kB' 'Active(file): 106000 kB' 'Inactive(file): 99740 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1499632 kB' 'Mapped: 82712 kB' 'AnonPages: 267040 kB' 'Shmem: 1293892 kB' 'KernelStack: 6952 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70096 kB' 'Slab: 258360 kB' 'SReclaimable: 70096 kB' 'SUnreclaim: 188264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 
11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 
11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.424 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.684 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:46.685 node0=1024 expecting 1024 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:46.685 00:02:46.685 real 0m2.930s 00:02:46.685 user 0m1.243s 00:02:46.685 sys 0m1.615s 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:46.685 11:07:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:46.685 ************************************ 00:02:46.685 END TEST no_shrink_alloc 00:02:46.685 ************************************ 00:02:46.685 11:07:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:46.685 11:07:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:46.685 00:02:46.685 real 0m11.904s 00:02:46.685 user 0m4.546s 00:02:46.685 sys 0m6.276s 00:02:46.685 11:07:12 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:46.685 11:07:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:46.685 ************************************ 00:02:46.685 END TEST hugepages 00:02:46.685 ************************************ 00:02:46.685 11:07:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:46.685 11:07:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:46.685 11:07:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:46.685 11:07:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.685 11:07:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:46.685 ************************************ 00:02:46.685 START TEST driver 00:02:46.685 ************************************ 00:02:46.685 11:07:12 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:46.685 * Looking for test storage... 00:02:46.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.685 11:07:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:46.685 11:07:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.685 11:07:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.216 11:07:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:49.216 11:07:15 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.216 11:07:15 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.216 11:07:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:49.216 ************************************ 00:02:49.216 START TEST guess_driver 00:02:49.216 ************************************ 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:49.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:49.216 Looking for driver=vfio-pci 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.216 11:07:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.596 11:07:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
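The repeated `read -r _ _ _ _ marker setup_driver` / `[[ -> == \-\> ]]` checks above parse `setup.sh config` output, where each rebind line carries a `->` marker followed by the target driver. A minimal standalone sketch of that loop (the sample BDFs and drivers below are hypothetical, not taken from this run):

```shell
#!/usr/bin/env bash
# Hypothetical sample of `setup.sh config` output; real BDFs/drivers will differ.
config_output='0000:88:00.0 (8086 0a54): nvme -> vfio-pci
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci'

fail=0
while read -r _ _ _ _ marker setup_driver; do
  # Only lines whose fifth field is the "->" rebind marker are checked.
  [[ $marker == '->' ]] || continue
  # Every rebound device is expected to land on the guessed driver.
  [[ $setup_driver == vfio-pci ]] || fail=1
done <<< "$config_output"
echo "fail=$fail"
```

This mirrors how the test accumulates `fail` before the final `(( fail == 0 ))` check seen in the log.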
00:02:51.535 11:07:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.825 00:02:54.825 real 0m5.007s 00:02:54.825 user 0m1.137s 00:02:54.825 sys 0m1.924s 00:02:54.825 11:07:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:54.825 11:07:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:54.825 ************************************ 00:02:54.825 END TEST guess_driver 00:02:54.825 ************************************ 00:02:54.825 11:07:20 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:02:54.825 00:02:54.825 real 0m7.706s 00:02:54.825 user 0m1.773s 00:02:54.825 sys 0m2.976s 00:02:54.825 11:07:20 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:54.825 11:07:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:54.825 ************************************ 00:02:54.825 END TEST driver 00:02:54.825 ************************************ 00:02:54.825 11:07:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:54.825 11:07:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:54.825 11:07:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.825 11:07:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.825 11:07:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:54.825 ************************************ 00:02:54.825 START TEST devices 00:02:54.825 ************************************ 00:02:54.825 11:07:20 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:54.825 * Looking for test storage... 
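The guess_driver run above settles on vfio-pci because `/sys/kernel/iommu_groups` is populated (`(( 141 > 0 ))` in the log) and the module's dependency chain resolves. A rough sketch of the group-count decision, using a temporary fake sysfs tree so it can run anywhere (the fallback driver name is an assumption for illustration):

```shell
#!/usr/bin/env bash
shopt -s nullglob   # an empty directory must yield an empty array, not a literal glob

# Fake sysfs tree standing in for /sys/kernel/iommu_groups.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/iommu_groups"/{0,1,2}

pick_driver() {
  local groups=("$1"/iommu_groups/*)
  # vfio-pci is only usable when at least one IOMMU group is present.
  if (( ${#groups[@]} > 0 )); then
    echo vfio-pci
  else
    echo uio_pci_generic   # hypothetical fallback, not from this log
  fi
}

driver=$(pick_driver "$sysfs")
echo "$driver"
rm -rf "$sysfs"
```

On the logged machine the same count check saw 141 groups, so the test echoed `Looking for driver=vfio-pci`.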
00:02:54.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:54.825 11:07:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:54.825 11:07:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:54.825 11:07:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.825 11:07:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.322 11:07:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
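The `get_zoned_devs` / `is_block_zoned` steps above read `/sys/block/<dev>/queue/zoned` and treat a value of `none` as a conventional (non-zoned) disk. A self-contained sketch of that check against a fake sysfs root (the real path is under `/sys/block`):

```shell
#!/usr/bin/env bash
# Fake block sysfs entry; a conventional NVMe namespace reports "none" here.
root=$(mktemp -d)
mkdir -p "$root/nvme0n1/queue"
echo none > "$root/nvme0n1/queue/zoned"

is_block_zoned() {
  local device=$1
  # Devices without the attribute, or reporting "none", are not zoned.
  [[ -e $root/$device/queue/zoned ]] || return 1
  [[ $(<"$root/$device/queue/zoned") != none ]]
}

if is_block_zoned nvme0n1; then result=zoned; else result=conventional; fi
echo "$result"
rm -rf "$root"
```

In the log the comparison `[[ none != none ]]` fails, so nvme0n1 is kept as a usable test disk rather than excluded as zoned.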
00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:56.322 11:07:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:56.322 11:07:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:56.322 11:07:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:56.322 No valid GPT data, bailing 00:02:56.322 11:07:22 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:56.322 11:07:22 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:56.322 11:07:22 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:56.322 11:07:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:56.322 11:07:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:56.322 11:07:22 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:56.322 11:07:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:56.322 11:07:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.322 11:07:22 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.322 11:07:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:56.322 ************************************ 00:02:56.322 START TEST nvme_mount 00:02:56.322 ************************************ 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:56.322 11:07:22 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:56.322 11:07:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:56.947 Creating new GPT entries in memory. 00:02:56.947 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:56.947 other utilities. 00:02:56.947 11:07:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:56.947 11:07:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:56.947 11:07:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:56.947 11:07:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:56.947 11:07:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:58.328 Creating new GPT entries in memory. 00:02:58.328 The operation has completed successfully. 
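The sector bounds in the sgdisk call above (`--new=1:2048:2099199`) follow from the arithmetic logged in `setup/common.sh`: a 1 GiB partition size is converted to 512-byte sectors, the first partition starts at sector 2048, and each end is start + size - 1. A sketch reproducing that computation:

```shell
#!/usr/bin/env bash
# Reproduce the 1 GiB partition bounds computed by setup/common.sh above.
size=1073741824          # bytes per partition
(( size /= 512 ))        # convert to 512-byte sectors: 2097152
part_start=0 part_end=0
for part in 1; do
  # First partition starts at sector 2048; later ones would start after the previous end.
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
done
spec="--new=${part}:${part_start}:${part_end}"
echo "$spec"
# → --new=1:2048:2099199
```

This matches the `sgdisk /dev/nvme0n1 --new=1:2048:2099199` invocation recorded in the log.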
00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 439613 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.328 11:07:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.268 11:07:25 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.268 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.269 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:59.529 
11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:02:59.529 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:02:59.529 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:02:59.788 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:02:59.788 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:02:59.788 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:02:59.788 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:02:59.788 11:07:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.169 11:07:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' ''
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.169 11:07:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:02.547 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:02.547
00:03:02.547 real 0m6.508s
00:03:02.547 user 0m1.487s
00:03:02.547 sys 0m2.588s
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:02.547 11:07:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:02.547 ************************************
00:03:02.547 END TEST nvme_mount
00:03:02.547 ************************************
00:03:02.547 11:07:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:03:02.547 11:07:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:02.547 11:07:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:02.547 11:07:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:02.547 11:07:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:02.547 ************************************
00:03:02.547 START TEST dm_mount
00:03:02.547 ************************************
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:02.547 11:07:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:03.930 Creating new GPT entries in memory.
00:03:03.930 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:03.930 other utilities.
00:03:03.930 11:07:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:03.930 11:07:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:03.930 11:07:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:03.930 11:07:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:03.930 11:07:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:04.869 Creating new GPT entries in memory.
00:03:04.869 The operation has completed successfully.
00:03:04.869 11:07:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:04.869 11:07:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:04.869 11:07:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:04.869 11:07:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:04.869 11:07:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:05.808 The operation has completed successfully.
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 441993
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:05.808 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.809 11:07:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:06.743 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:06.743 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:06.743 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:06.743 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.743 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:07.002 11:07:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.002 11:07:33 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:03:08.414 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:03:08.414
00:03:08.414 real 0m5.853s
00:03:08.414 user 0m1.054s
00:03:08.414 sys 0m1.690s
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:08.414 11:07:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:03:08.414 ************************************
00:03:08.414 END TEST dm_mount
00:03:08.414 ************************************
00:03:08.414 11:07:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:08.414 11:07:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:08.673 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:08.673 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:08.673 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:08.673 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:08.673 11:07:34 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:08.673
00:03:08.673 real 0m14.391s
00:03:08.673 user 0m3.227s
00:03:08.673 sys 0m5.386s
00:03:08.673 11:07:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:08.673 11:07:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:08.673 ************************************
00:03:08.673 END TEST devices
00:03:08.673 ************************************
00:03:08.673 11:07:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:08.673
00:03:08.673 real 0m45.207s
00:03:08.673 user 0m12.962s
00:03:08.673 sys 0m20.356s
00:03:08.673 11:07:34 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:08.673 11:07:34 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:08.673 ************************************
00:03:08.673 END TEST setup.sh
00:03:08.673 ************************************
00:03:08.932 11:07:34 -- common/autotest_common.sh@1142 -- # return 0
00:03:08.932 11:07:34 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:10.310 Hugepages
00:03:10.310 node hugesize free / total
00:03:10.310 node0 1048576kB 0 / 0 00:03:10.310 node0 2048kB 2048 / 2048 00:03:10.310 node1 1048576kB 0 / 0 00:03:10.310 node1 2048kB 0 / 0 00:03:10.310 00:03:10.310 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.310 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:10.310 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:10.310 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:10.310 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:10.310 11:07:36 -- spdk/autotest.sh@130 -- # uname -s 00:03:10.310 11:07:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:10.310 11:07:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:10.310 11:07:36 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.247 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.247 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.247 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.247 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.247 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.247 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.247 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.507 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:11.507 
0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:11.507 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:12.448 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.448 11:07:38 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:13.385 11:07:39 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:13.385 11:07:39 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:13.385 11:07:39 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:13.385 11:07:39 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:13.385 11:07:39 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:13.385 11:07:39 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:13.385 11:07:39 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:13.385 11:07:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:13.385 11:07:39 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:13.643 11:07:39 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:13.643 11:07:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:13.643 11:07:39 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.583 Waiting for block devices as requested 00:03:14.842 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:14.842 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:15.102 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:15.102 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:15.102 0000:00:04.4 (8086 
0e24): vfio-pci -> ioatdma 00:03:15.363 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:15.363 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:15.363 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:15.363 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:15.621 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:15.621 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:15.621 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:15.621 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:15.881 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:15.881 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:15.881 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:16.141 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:16.141 11:07:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:16.141 11:07:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:16.141 11:07:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:16.141 11:07:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:16.141 11:07:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:16.141 11:07:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:16.141 11:07:42 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:16.141 11:07:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:16.141 11:07:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:16.141 11:07:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:16.141 11:07:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:16.141 11:07:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:16.141 11:07:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:16.141 11:07:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:16.141 11:07:42 -- common/autotest_common.sh@1557 -- # continue 00:03:16.141 11:07:42 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:16.141 11:07:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:16.141 11:07:42 -- common/autotest_common.sh@10 -- # set +x 00:03:16.141 11:07:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:16.141 11:07:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:16.141 11:07:42 -- common/autotest_common.sh@10 -- # set +x 00:03:16.141 11:07:42 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.518 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:17.518 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.2 (8086 
0e22): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:17.518 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:18.508 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:18.508 11:07:44 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:18.508 11:07:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:18.508 11:07:44 -- common/autotest_common.sh@10 -- # set +x 00:03:18.508 11:07:44 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:18.508 11:07:44 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:18.508 11:07:44 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:18.508 11:07:44 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:18.508 11:07:44 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:18.508 11:07:44 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:18.508 11:07:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:18.508 11:07:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:18.508 11:07:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:18.508 11:07:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:18.508 11:07:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:18.772 11:07:44 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:18.772 11:07:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:18.772 11:07:44 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:18.772 11:07:44 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:18.772 11:07:44 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:18.772 11:07:44 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:18.772 11:07:44 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:18.772 11:07:44 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:18.772 11:07:44 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:18.772 11:07:44 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=447298 00:03:18.772 11:07:44 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:18.772 11:07:44 -- common/autotest_common.sh@1598 -- # waitforlisten 447298 00:03:18.772 11:07:44 -- common/autotest_common.sh@829 -- # '[' -z 447298 ']' 00:03:18.772 11:07:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:18.772 11:07:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:18.772 11:07:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:18.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:18.772 11:07:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:18.772 11:07:44 -- common/autotest_common.sh@10 -- # set +x 00:03:18.772 [2024-07-12 11:07:44.708860] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:03:18.772 [2024-07-12 11:07:44.708984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447298 ] 00:03:18.772 EAL: No free 2048 kB hugepages reported on node 1 00:03:18.772 [2024-07-12 11:07:44.769423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.772 [2024-07-12 11:07:44.878922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:19.704 11:07:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:19.704 11:07:45 -- common/autotest_common.sh@862 -- # return 0 00:03:19.704 11:07:45 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:19.704 11:07:45 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:19.704 11:07:45 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:22.984 nvme0n1 00:03:22.984 11:07:48 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:22.984 [2024-07-12 11:07:48.938991] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:22.984 [2024-07-12 11:07:48.939047] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:22.984 request: 00:03:22.984 { 00:03:22.984 "nvme_ctrlr_name": "nvme0", 00:03:22.984 "password": "test", 00:03:22.984 "method": "bdev_nvme_opal_revert", 00:03:22.984 "req_id": 1 00:03:22.984 } 00:03:22.984 Got JSON-RPC error response 00:03:22.984 response: 00:03:22.984 { 00:03:22.984 "code": -32603, 00:03:22.984 "message": "Internal error" 00:03:22.984 } 00:03:22.984 11:07:48 -- common/autotest_common.sh@1604 -- # true 00:03:22.984 11:07:48 -- common/autotest_common.sh@1605 -- # 
(( ++bdf_id )) 00:03:22.984 11:07:48 -- common/autotest_common.sh@1608 -- # killprocess 447298 00:03:22.984 11:07:48 -- common/autotest_common.sh@948 -- # '[' -z 447298 ']' 00:03:22.984 11:07:48 -- common/autotest_common.sh@952 -- # kill -0 447298 00:03:22.984 11:07:48 -- common/autotest_common.sh@953 -- # uname 00:03:22.984 11:07:48 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:22.984 11:07:48 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 447298 00:03:22.984 11:07:48 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:22.984 11:07:48 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:22.984 11:07:48 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 447298' 00:03:22.984 killing process with pid 447298 00:03:22.984 11:07:48 -- common/autotest_common.sh@967 -- # kill 447298 00:03:22.984 11:07:48 -- common/autotest_common.sh@972 -- # wait 447298 00:03:24.880 11:07:50 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:24.880 11:07:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:24.880 11:07:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:24.880 11:07:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:24.880 11:07:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:24.880 11:07:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:24.880 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:24.880 11:07:50 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:24.880 11:07:50 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:24.880 11:07:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.880 11:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.880 11:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:24.880 ************************************ 00:03:24.880 START TEST env 00:03:24.880 ************************************ 00:03:24.880 11:07:50 env -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:24.880 * Looking for test storage... 00:03:24.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:24.880 11:07:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:24.880 11:07:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.880 11:07:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.880 11:07:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.880 ************************************ 00:03:24.880 START TEST env_memory 00:03:24.880 ************************************ 00:03:24.880 11:07:50 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:24.880 00:03:24.880 00:03:24.880 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.880 http://cunit.sourceforge.net/ 00:03:24.880 00:03:24.880 00:03:24.880 Suite: memory 00:03:24.880 Test: alloc and free memory map ...[2024-07-12 11:07:50.915506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:24.880 passed 00:03:24.880 Test: mem map translation ...[2024-07-12 11:07:50.935203] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:24.880 [2024-07-12 11:07:50.935225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:24.880 [2024-07-12 11:07:50.935275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual 
address 281474976710656 00:03:24.880 [2024-07-12 11:07:50.935287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:24.880 passed 00:03:24.880 Test: mem map registration ...[2024-07-12 11:07:50.975979] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:24.880 [2024-07-12 11:07:50.975999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:24.880 passed 00:03:25.139 Test: mem map adjacent registrations ...passed 00:03:25.139 00:03:25.139 Run Summary: Type Total Ran Passed Failed Inactive 00:03:25.139 suites 1 1 n/a 0 0 00:03:25.139 tests 4 4 4 0 0 00:03:25.139 asserts 152 152 152 0 n/a 00:03:25.139 00:03:25.139 Elapsed time = 0.142 seconds 00:03:25.139 00:03:25.139 real 0m0.150s 00:03:25.139 user 0m0.141s 00:03:25.139 sys 0m0.009s 00:03:25.139 11:07:51 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.139 11:07:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:25.139 ************************************ 00:03:25.139 END TEST env_memory 00:03:25.139 ************************************ 00:03:25.139 11:07:51 env -- common/autotest_common.sh@1142 -- # return 0 00:03:25.139 11:07:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.139 11:07:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.139 11:07:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.139 11:07:51 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.139 ************************************ 00:03:25.139 START TEST env_vtophys 00:03:25.139 ************************************ 00:03:25.139 11:07:51 
env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:25.139 EAL: lib.eal log level changed from notice to debug 00:03:25.139 EAL: Detected lcore 0 as core 0 on socket 0 00:03:25.139 EAL: Detected lcore 1 as core 1 on socket 0 00:03:25.139 EAL: Detected lcore 2 as core 2 on socket 0 00:03:25.139 EAL: Detected lcore 3 as core 3 on socket 0 00:03:25.139 EAL: Detected lcore 4 as core 4 on socket 0 00:03:25.139 EAL: Detected lcore 5 as core 5 on socket 0 00:03:25.139 EAL: Detected lcore 6 as core 8 on socket 0 00:03:25.139 EAL: Detected lcore 7 as core 9 on socket 0 00:03:25.139 EAL: Detected lcore 8 as core 10 on socket 0 00:03:25.139 EAL: Detected lcore 9 as core 11 on socket 0 00:03:25.139 EAL: Detected lcore 10 as core 12 on socket 0 00:03:25.139 EAL: Detected lcore 11 as core 13 on socket 0 00:03:25.139 EAL: Detected lcore 12 as core 0 on socket 1 00:03:25.139 EAL: Detected lcore 13 as core 1 on socket 1 00:03:25.139 EAL: Detected lcore 14 as core 2 on socket 1 00:03:25.139 EAL: Detected lcore 15 as core 3 on socket 1 00:03:25.139 EAL: Detected lcore 16 as core 4 on socket 1 00:03:25.139 EAL: Detected lcore 17 as core 5 on socket 1 00:03:25.139 EAL: Detected lcore 18 as core 8 on socket 1 00:03:25.140 EAL: Detected lcore 19 as core 9 on socket 1 00:03:25.140 EAL: Detected lcore 20 as core 10 on socket 1 00:03:25.140 EAL: Detected lcore 21 as core 11 on socket 1 00:03:25.140 EAL: Detected lcore 22 as core 12 on socket 1 00:03:25.140 EAL: Detected lcore 23 as core 13 on socket 1 00:03:25.140 EAL: Detected lcore 24 as core 0 on socket 0 00:03:25.140 EAL: Detected lcore 25 as core 1 on socket 0 00:03:25.140 EAL: Detected lcore 26 as core 2 on socket 0 00:03:25.140 EAL: Detected lcore 27 as core 3 on socket 0 00:03:25.140 EAL: Detected lcore 28 as core 4 on socket 0 00:03:25.140 EAL: Detected lcore 29 as core 5 on socket 0 00:03:25.140 EAL: Detected lcore 30 as core 8 on socket 0 
00:03:25.140 EAL: Detected lcore 31 as core 9 on socket 0 00:03:25.140 EAL: Detected lcore 32 as core 10 on socket 0 00:03:25.140 EAL: Detected lcore 33 as core 11 on socket 0 00:03:25.140 EAL: Detected lcore 34 as core 12 on socket 0 00:03:25.140 EAL: Detected lcore 35 as core 13 on socket 0 00:03:25.140 EAL: Detected lcore 36 as core 0 on socket 1 00:03:25.140 EAL: Detected lcore 37 as core 1 on socket 1 00:03:25.140 EAL: Detected lcore 38 as core 2 on socket 1 00:03:25.140 EAL: Detected lcore 39 as core 3 on socket 1 00:03:25.140 EAL: Detected lcore 40 as core 4 on socket 1 00:03:25.140 EAL: Detected lcore 41 as core 5 on socket 1 00:03:25.140 EAL: Detected lcore 42 as core 8 on socket 1 00:03:25.140 EAL: Detected lcore 43 as core 9 on socket 1 00:03:25.140 EAL: Detected lcore 44 as core 10 on socket 1 00:03:25.140 EAL: Detected lcore 45 as core 11 on socket 1 00:03:25.140 EAL: Detected lcore 46 as core 12 on socket 1 00:03:25.140 EAL: Detected lcore 47 as core 13 on socket 1 00:03:25.140 EAL: Maximum logical cores by configuration: 128 00:03:25.140 EAL: Detected CPU lcores: 48 00:03:25.140 EAL: Detected NUMA nodes: 2 00:03:25.140 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:25.140 EAL: Detected shared linkage of DPDK 00:03:25.140 EAL: No shared files mode enabled, IPC will be disabled 00:03:25.140 EAL: Bus pci wants IOVA as 'DC' 00:03:25.140 EAL: Buses did not request a specific IOVA mode. 00:03:25.140 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:25.140 EAL: Selected IOVA mode 'VA' 00:03:25.140 EAL: No free 2048 kB hugepages reported on node 1 00:03:25.140 EAL: Probing VFIO support... 
00:03:25.140 EAL: IOMMU type 1 (Type 1) is supported 00:03:25.140 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:25.140 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:25.140 EAL: VFIO support initialized 00:03:25.140 EAL: Ask a virtual area of 0x2e000 bytes 00:03:25.140 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:25.140 EAL: Setting up physically contiguous memory... 00:03:25.140 EAL: Setting maximum number of open files to 524288 00:03:25.140 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:25.140 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:25.140 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:25.140 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:25.140 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:25.140 EAL: Ask a virtual area of 0x61000 bytes 00:03:25.140 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:25.140 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:25.140 EAL: Ask a virtual area of 0x400000000 bytes 00:03:25.140 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:03:25.140 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:25.140 EAL: Hugepages will be freed exactly as allocated. 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: TSC frequency is ~2700000 KHz 00:03:25.140 EAL: Main lcore 0 is ready (tid=7f8a39036a00;cpuset=[0]) 00:03:25.140 EAL: Trying to obtain current memory policy. 00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.140 EAL: Restoring previous memory policy: 0 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was expanded by 2MB 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:25.140 EAL: Mem event callback 'spdk:(nil)' registered 00:03:25.140 00:03:25.140 00:03:25.140 CUnit - A unit testing framework for C - Version 2.1-3 00:03:25.140 http://cunit.sourceforge.net/ 00:03:25.140 00:03:25.140 00:03:25.140 Suite: components_suite 00:03:25.140 Test: vtophys_malloc_test ...passed 00:03:25.140 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.140 EAL: Restoring previous memory policy: 4 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was expanded by 4MB 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was shrunk by 4MB 00:03:25.140 EAL: Trying to obtain current memory policy. 
00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.140 EAL: Restoring previous memory policy: 4 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was expanded by 6MB 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was shrunk by 6MB 00:03:25.140 EAL: Trying to obtain current memory policy. 00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.140 EAL: Restoring previous memory policy: 4 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was expanded by 10MB 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was shrunk by 10MB 00:03:25.140 EAL: Trying to obtain current memory policy. 00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.140 EAL: Restoring previous memory policy: 4 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was expanded by 18MB 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was shrunk by 18MB 00:03:25.140 EAL: Trying to obtain current memory policy. 
00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.140 EAL: Restoring previous memory policy: 4 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was expanded by 34MB 00:03:25.140 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.140 EAL: request: mp_malloc_sync 00:03:25.140 EAL: No shared files mode enabled, IPC is disabled 00:03:25.140 EAL: Heap on socket 0 was shrunk by 34MB 00:03:25.140 EAL: Trying to obtain current memory policy. 00:03:25.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.141 EAL: Restoring previous memory policy: 4 00:03:25.141 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.141 EAL: request: mp_malloc_sync 00:03:25.141 EAL: No shared files mode enabled, IPC is disabled 00:03:25.141 EAL: Heap on socket 0 was expanded by 66MB 00:03:25.141 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.141 EAL: request: mp_malloc_sync 00:03:25.141 EAL: No shared files mode enabled, IPC is disabled 00:03:25.141 EAL: Heap on socket 0 was shrunk by 66MB 00:03:25.141 EAL: Trying to obtain current memory policy. 00:03:25.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.141 EAL: Restoring previous memory policy: 4 00:03:25.141 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.141 EAL: request: mp_malloc_sync 00:03:25.141 EAL: No shared files mode enabled, IPC is disabled 00:03:25.141 EAL: Heap on socket 0 was expanded by 130MB 00:03:25.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.398 EAL: request: mp_malloc_sync 00:03:25.398 EAL: No shared files mode enabled, IPC is disabled 00:03:25.398 EAL: Heap on socket 0 was shrunk by 130MB 00:03:25.398 EAL: Trying to obtain current memory policy. 
00:03:25.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.398 EAL: Restoring previous memory policy: 4 00:03:25.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.398 EAL: request: mp_malloc_sync 00:03:25.398 EAL: No shared files mode enabled, IPC is disabled 00:03:25.398 EAL: Heap on socket 0 was expanded by 258MB 00:03:25.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.398 EAL: request: mp_malloc_sync 00:03:25.398 EAL: No shared files mode enabled, IPC is disabled 00:03:25.398 EAL: Heap on socket 0 was shrunk by 258MB 00:03:25.398 EAL: Trying to obtain current memory policy. 00:03:25.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:25.656 EAL: Restoring previous memory policy: 4 00:03:25.656 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.656 EAL: request: mp_malloc_sync 00:03:25.656 EAL: No shared files mode enabled, IPC is disabled 00:03:25.656 EAL: Heap on socket 0 was expanded by 514MB 00:03:25.656 EAL: Calling mem event callback 'spdk:(nil)' 00:03:25.914 EAL: request: mp_malloc_sync 00:03:25.914 EAL: No shared files mode enabled, IPC is disabled 00:03:25.914 EAL: Heap on socket 0 was shrunk by 514MB 00:03:25.914 EAL: Trying to obtain current memory policy. 
00:03:25.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:26.171 EAL: Restoring previous memory policy: 4 00:03:26.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.171 EAL: request: mp_malloc_sync 00:03:26.171 EAL: No shared files mode enabled, IPC is disabled 00:03:26.171 EAL: Heap on socket 0 was expanded by 1026MB 00:03:26.171 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.429 EAL: request: mp_malloc_sync 00:03:26.429 EAL: No shared files mode enabled, IPC is disabled 00:03:26.429 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:26.429 passed 00:03:26.430 00:03:26.430 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.430 suites 1 1 n/a 0 0 00:03:26.430 tests 2 2 2 0 0 00:03:26.430 asserts 497 497 497 0 n/a 00:03:26.430 00:03:26.430 Elapsed time = 1.322 seconds 00:03:26.430 EAL: Calling mem event callback 'spdk:(nil)' 00:03:26.430 EAL: request: mp_malloc_sync 00:03:26.430 EAL: No shared files mode enabled, IPC is disabled 00:03:26.430 EAL: Heap on socket 0 was shrunk by 2MB 00:03:26.430 EAL: No shared files mode enabled, IPC is disabled 00:03:26.430 EAL: No shared files mode enabled, IPC is disabled 00:03:26.430 EAL: No shared files mode enabled, IPC is disabled 00:03:26.430 00:03:26.430 real 0m1.438s 00:03:26.430 user 0m0.839s 00:03:26.430 sys 0m0.563s 00:03:26.430 11:07:52 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.430 11:07:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:26.430 ************************************ 00:03:26.430 END TEST env_vtophys 00:03:26.430 ************************************ 00:03:26.430 11:07:52 env -- common/autotest_common.sh@1142 -- # return 0 00:03:26.430 11:07:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.430 11:07:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.430 11:07:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:03:26.430 11:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.688 ************************************ 00:03:26.688 START TEST env_pci 00:03:26.688 ************************************ 00:03:26.688 11:07:52 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:26.688 00:03:26.688 00:03:26.688 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.688 http://cunit.sourceforge.net/ 00:03:26.688 00:03:26.688 00:03:26.688 Suite: pci 00:03:26.688 Test: pci_hook ...[2024-07-12 11:07:52.575561] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 448315 has claimed it 00:03:26.688 EAL: Cannot find device (10000:00:01.0) 00:03:26.688 EAL: Failed to attach device on primary process 00:03:26.688 passed 00:03:26.688 00:03:26.688 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.688 suites 1 1 n/a 0 0 00:03:26.688 tests 1 1 1 0 0 00:03:26.688 asserts 25 25 25 0 n/a 00:03:26.688 00:03:26.688 Elapsed time = 0.021 seconds 00:03:26.688 00:03:26.688 real 0m0.034s 00:03:26.688 user 0m0.009s 00:03:26.688 sys 0m0.025s 00:03:26.688 11:07:52 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.688 11:07:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:26.688 ************************************ 00:03:26.688 END TEST env_pci 00:03:26.688 ************************************ 00:03:26.688 11:07:52 env -- common/autotest_common.sh@1142 -- # return 0 00:03:26.688 11:07:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:26.688 11:07:52 env -- env/env.sh@15 -- # uname 00:03:26.688 11:07:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:26.688 11:07:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:26.688 11:07:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.688 11:07:52 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:26.688 11:07:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.688 11:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.688 ************************************ 00:03:26.688 START TEST env_dpdk_post_init 00:03:26.688 ************************************ 00:03:26.688 11:07:52 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:26.688 EAL: Detected CPU lcores: 48 00:03:26.688 EAL: Detected NUMA nodes: 2 00:03:26.688 EAL: Detected shared linkage of DPDK 00:03:26.688 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.688 EAL: Selected IOVA mode 'VA' 00:03:26.688 EAL: No free 2048 kB hugepages reported on node 1 00:03:26.688 EAL: VFIO support initialized 00:03:26.688 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.688 EAL: Using IOMMU type 1 (Type 1) 00:03:26.688 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:26.688 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:26.688 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:26.688 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:26.688 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 
1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:26.948 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:27.886 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:31.161 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:31.161 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:31.161 Starting DPDK initialization... 00:03:31.161 Starting SPDK post initialization... 00:03:31.161 SPDK NVMe probe 00:03:31.161 Attaching to 0000:88:00.0 00:03:31.161 Attached to 0000:88:00.0 00:03:31.161 Cleaning up... 
00:03:31.161 00:03:31.161 real 0m4.376s 00:03:31.161 user 0m3.254s 00:03:31.161 sys 0m0.183s 00:03:31.161 11:07:57 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.161 11:07:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:31.161 ************************************ 00:03:31.161 END TEST env_dpdk_post_init 00:03:31.161 ************************************ 00:03:31.161 11:07:57 env -- common/autotest_common.sh@1142 -- # return 0 00:03:31.161 11:07:57 env -- env/env.sh@26 -- # uname 00:03:31.161 11:07:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:31.161 11:07:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:31.161 11:07:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.161 11:07:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.161 11:07:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.161 ************************************ 00:03:31.161 START TEST env_mem_callbacks 00:03:31.161 ************************************ 00:03:31.161 11:07:57 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:31.161 EAL: Detected CPU lcores: 48 00:03:31.161 EAL: Detected NUMA nodes: 2 00:03:31.161 EAL: Detected shared linkage of DPDK 00:03:31.161 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:31.161 EAL: Selected IOVA mode 'VA' 00:03:31.161 EAL: No free 2048 kB hugepages reported on node 1 00:03:31.161 EAL: VFIO support initialized 00:03:31.161 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:31.161 00:03:31.161 00:03:31.161 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.161 http://cunit.sourceforge.net/ 00:03:31.161 00:03:31.161 00:03:31.161 Suite: memory 00:03:31.161 Test: test ... 
00:03:31.161 register 0x200000200000 2097152 00:03:31.161 malloc 3145728 00:03:31.161 register 0x200000400000 4194304 00:03:31.161 buf 0x200000500000 len 3145728 PASSED 00:03:31.161 malloc 64 00:03:31.161 buf 0x2000004fff40 len 64 PASSED 00:03:31.161 malloc 4194304 00:03:31.161 register 0x200000800000 6291456 00:03:31.161 buf 0x200000a00000 len 4194304 PASSED 00:03:31.161 free 0x200000500000 3145728 00:03:31.161 free 0x2000004fff40 64 00:03:31.161 unregister 0x200000400000 4194304 PASSED 00:03:31.161 free 0x200000a00000 4194304 00:03:31.161 unregister 0x200000800000 6291456 PASSED 00:03:31.161 malloc 8388608 00:03:31.161 register 0x200000400000 10485760 00:03:31.161 buf 0x200000600000 len 8388608 PASSED 00:03:31.161 free 0x200000600000 8388608 00:03:31.161 unregister 0x200000400000 10485760 PASSED 00:03:31.161 passed 00:03:31.161 00:03:31.161 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.161 suites 1 1 n/a 0 0 00:03:31.161 tests 1 1 1 0 0 00:03:31.161 asserts 15 15 15 0 n/a 00:03:31.161 00:03:31.161 Elapsed time = 0.004 seconds 00:03:31.161 00:03:31.161 real 0m0.049s 00:03:31.161 user 0m0.015s 00:03:31.161 sys 0m0.034s 00:03:31.161 11:07:57 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.161 11:07:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:31.161 ************************************ 00:03:31.161 END TEST env_mem_callbacks 00:03:31.161 ************************************ 00:03:31.161 11:07:57 env -- common/autotest_common.sh@1142 -- # return 0 00:03:31.161 00:03:31.161 real 0m6.355s 00:03:31.161 user 0m4.385s 00:03:31.161 sys 0m1.012s 00:03:31.161 11:07:57 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.161 11:07:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.161 ************************************ 00:03:31.161 END TEST env 00:03:31.161 ************************************ 00:03:31.161 11:07:57 -- common/autotest_common.sh@1142 -- # return 0 
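The `env` suite output above is dominated by matched EAL pairs: every "Heap on socket 0 was expanded by NMB" is later balanced by a "shrunk by NMB" line once the test frees its allocation. A quick sanity check when reading a run like this is to sum the deltas and confirm they cancel. The helper below is a minimal, hypothetical log-scanning sketch (not part of SPDK or the autotest scripts); the function name and the regex are assumptions, matched to the message format seen in this log.

```python
import re

# Matches the EAL heap messages seen in the log above, e.g.
#   "EAL: Heap on socket 0 was expanded by 4MB"
#   "EAL: Heap on socket 0 was shrunk by 4MB"
_HEAP_RE = re.compile(r"Heap on socket \d+ was (expanded|shrunk) by (\d+)MB")

def heap_balance_mb(log_lines):
    """Return net heap growth in MB across a run.

    A fully balanced run (every expansion freed again, as in the
    vtophys test above) returns 0; a positive result suggests an
    allocation that was never released.
    """
    delta = 0
    for line in log_lines:
        m = _HEAP_RE.search(line)
        if m:
            size = int(m.group(2))
            delta += size if m.group(1) == "expanded" else -size
    return delta

sample = [
    "EAL: Heap on socket 0 was expanded by 4MB",
    "EAL: Heap on socket 0 was shrunk by 4MB",
    "EAL: Heap on socket 0 was expanded by 6MB",
    "EAL: Heap on socket 0 was shrunk by 6MB",
]
print(heap_balance_mb(sample))  # 0 for a balanced run
```

Running it over the full vtophys section of this log (2MB through 1026MB expansions, each shrunk again) would likewise net to zero.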
00:03:31.161 11:07:57 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:31.161 11:07:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.161 11:07:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.161 11:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:31.161 ************************************ 00:03:31.161 START TEST rpc 00:03:31.161 ************************************ 00:03:31.161 11:07:57 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:31.161 * Looking for test storage... 00:03:31.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.161 11:07:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=448967 00:03:31.161 11:07:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:31.161 11:07:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.161 11:07:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 448967 00:03:31.162 11:07:57 rpc -- common/autotest_common.sh@829 -- # '[' -z 448967 ']' 00:03:31.162 11:07:57 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:31.162 11:07:57 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:31.162 11:07:57 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:31.162 11:07:57 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:31.162 11:07:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.420 [2024-07-12 11:07:57.308735] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:03:31.420 [2024-07-12 11:07:57.308833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid448967 ] 00:03:31.420 EAL: No free 2048 kB hugepages reported on node 1 00:03:31.420 [2024-07-12 11:07:57.367688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.420 [2024-07-12 11:07:57.474471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:31.420 [2024-07-12 11:07:57.474529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 448967' to capture a snapshot of events at runtime. 00:03:31.420 [2024-07-12 11:07:57.474552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:31.420 [2024-07-12 11:07:57.474563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:31.420 [2024-07-12 11:07:57.474574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid448967 for offline analysis/debug. 
00:03:31.420 [2024-07-12 11:07:57.474606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.678 11:07:57 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:31.678 11:07:57 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:31.678 11:07:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.678 11:07:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.678 11:07:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:31.678 11:07:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:31.678 11:07:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.678 11:07:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.678 11:07:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.678 ************************************ 00:03:31.678 START TEST rpc_integrity 00:03:31.678 ************************************ 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:31.678 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.678 11:07:57 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:31.678 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:31.678 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:31.678 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.678 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:31.678 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.678 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.936 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.936 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:31.936 { 00:03:31.936 "name": "Malloc0", 00:03:31.936 "aliases": [ 00:03:31.936 "f9970948-6e06-4fe2-978d-c77d1d52a3eb" 00:03:31.936 ], 00:03:31.936 "product_name": "Malloc disk", 00:03:31.936 "block_size": 512, 00:03:31.936 "num_blocks": 16384, 00:03:31.936 "uuid": "f9970948-6e06-4fe2-978d-c77d1d52a3eb", 00:03:31.936 "assigned_rate_limits": { 00:03:31.936 "rw_ios_per_sec": 0, 00:03:31.936 "rw_mbytes_per_sec": 0, 00:03:31.936 "r_mbytes_per_sec": 0, 00:03:31.937 "w_mbytes_per_sec": 0 00:03:31.937 }, 00:03:31.937 "claimed": false, 00:03:31.937 "zoned": false, 00:03:31.937 "supported_io_types": { 00:03:31.937 "read": true, 00:03:31.937 "write": true, 00:03:31.937 "unmap": true, 00:03:31.937 "flush": true, 00:03:31.937 "reset": true, 00:03:31.937 "nvme_admin": false, 00:03:31.937 "nvme_io": false, 00:03:31.937 "nvme_io_md": false, 00:03:31.937 "write_zeroes": true, 00:03:31.937 "zcopy": true, 00:03:31.937 "get_zone_info": false, 00:03:31.937 
"zone_management": false, 00:03:31.937 "zone_append": false, 00:03:31.937 "compare": false, 00:03:31.937 "compare_and_write": false, 00:03:31.937 "abort": true, 00:03:31.937 "seek_hole": false, 00:03:31.937 "seek_data": false, 00:03:31.937 "copy": true, 00:03:31.937 "nvme_iov_md": false 00:03:31.937 }, 00:03:31.937 "memory_domains": [ 00:03:31.937 { 00:03:31.937 "dma_device_id": "system", 00:03:31.937 "dma_device_type": 1 00:03:31.937 }, 00:03:31.937 { 00:03:31.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.937 "dma_device_type": 2 00:03:31.937 } 00:03:31.937 ], 00:03:31.937 "driver_specific": {} 00:03:31.937 } 00:03:31.937 ]' 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 [2024-07-12 11:07:57.851604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:31.937 [2024-07-12 11:07:57.851649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:31.937 [2024-07-12 11:07:57.851671] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcc9d50 00:03:31.937 [2024-07-12 11:07:57.851684] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:31.937 [2024-07-12 11:07:57.853122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:31.937 [2024-07-12 11:07:57.853148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:31.937 Passthru0 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:31.937 { 00:03:31.937 "name": "Malloc0", 00:03:31.937 "aliases": [ 00:03:31.937 "f9970948-6e06-4fe2-978d-c77d1d52a3eb" 00:03:31.937 ], 00:03:31.937 "product_name": "Malloc disk", 00:03:31.937 "block_size": 512, 00:03:31.937 "num_blocks": 16384, 00:03:31.937 "uuid": "f9970948-6e06-4fe2-978d-c77d1d52a3eb", 00:03:31.937 "assigned_rate_limits": { 00:03:31.937 "rw_ios_per_sec": 0, 00:03:31.937 "rw_mbytes_per_sec": 0, 00:03:31.937 "r_mbytes_per_sec": 0, 00:03:31.937 "w_mbytes_per_sec": 0 00:03:31.937 }, 00:03:31.937 "claimed": true, 00:03:31.937 "claim_type": "exclusive_write", 00:03:31.937 "zoned": false, 00:03:31.937 "supported_io_types": { 00:03:31.937 "read": true, 00:03:31.937 "write": true, 00:03:31.937 "unmap": true, 00:03:31.937 "flush": true, 00:03:31.937 "reset": true, 00:03:31.937 "nvme_admin": false, 00:03:31.937 "nvme_io": false, 00:03:31.937 "nvme_io_md": false, 00:03:31.937 "write_zeroes": true, 00:03:31.937 "zcopy": true, 00:03:31.937 "get_zone_info": false, 00:03:31.937 "zone_management": false, 00:03:31.937 "zone_append": false, 00:03:31.937 "compare": false, 00:03:31.937 "compare_and_write": false, 00:03:31.937 "abort": true, 00:03:31.937 "seek_hole": false, 00:03:31.937 "seek_data": false, 00:03:31.937 "copy": true, 00:03:31.937 "nvme_iov_md": false 00:03:31.937 }, 00:03:31.937 "memory_domains": [ 00:03:31.937 { 00:03:31.937 "dma_device_id": "system", 00:03:31.937 "dma_device_type": 1 00:03:31.937 }, 00:03:31.937 { 00:03:31.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.937 "dma_device_type": 2 00:03:31.937 } 00:03:31.937 ], 00:03:31.937 "driver_specific": {} 00:03:31.937 }, 00:03:31.937 { 
00:03:31.937 "name": "Passthru0", 00:03:31.937 "aliases": [ 00:03:31.937 "8f64756b-c942-5294-9412-8278725d9761" 00:03:31.937 ], 00:03:31.937 "product_name": "passthru", 00:03:31.937 "block_size": 512, 00:03:31.937 "num_blocks": 16384, 00:03:31.937 "uuid": "8f64756b-c942-5294-9412-8278725d9761", 00:03:31.937 "assigned_rate_limits": { 00:03:31.937 "rw_ios_per_sec": 0, 00:03:31.937 "rw_mbytes_per_sec": 0, 00:03:31.937 "r_mbytes_per_sec": 0, 00:03:31.937 "w_mbytes_per_sec": 0 00:03:31.937 }, 00:03:31.937 "claimed": false, 00:03:31.937 "zoned": false, 00:03:31.937 "supported_io_types": { 00:03:31.937 "read": true, 00:03:31.937 "write": true, 00:03:31.937 "unmap": true, 00:03:31.937 "flush": true, 00:03:31.937 "reset": true, 00:03:31.937 "nvme_admin": false, 00:03:31.937 "nvme_io": false, 00:03:31.937 "nvme_io_md": false, 00:03:31.937 "write_zeroes": true, 00:03:31.937 "zcopy": true, 00:03:31.937 "get_zone_info": false, 00:03:31.937 "zone_management": false, 00:03:31.937 "zone_append": false, 00:03:31.937 "compare": false, 00:03:31.937 "compare_and_write": false, 00:03:31.937 "abort": true, 00:03:31.937 "seek_hole": false, 00:03:31.937 "seek_data": false, 00:03:31.937 "copy": true, 00:03:31.937 "nvme_iov_md": false 00:03:31.937 }, 00:03:31.937 "memory_domains": [ 00:03:31.937 { 00:03:31.937 "dma_device_id": "system", 00:03:31.937 "dma_device_type": 1 00:03:31.937 }, 00:03:31.937 { 00:03:31.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.937 "dma_device_type": 2 00:03:31.937 } 00:03:31.937 ], 00:03:31.937 "driver_specific": { 00:03:31.937 "passthru": { 00:03:31.937 "name": "Passthru0", 00:03:31.937 "base_bdev_name": "Malloc0" 00:03:31.937 } 00:03:31.937 } 00:03:31.937 } 00:03:31.937 ]' 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:31.937 11:07:57 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:31.937 11:07:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:31.937 00:03:31.937 real 0m0.226s 00:03:31.937 user 0m0.142s 00:03:31.937 sys 0m0.020s 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.937 11:07:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.937 ************************************ 00:03:31.937 END TEST rpc_integrity 00:03:31.937 ************************************ 00:03:31.937 11:07:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:31.937 11:07:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:31.937 11:07:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.937 11:07:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.938 11:07:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.938 
************************************ 00:03:31.938 START TEST rpc_plugins 00:03:31.938 ************************************ 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:31.938 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.938 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:31.938 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:31.938 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.938 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:31.938 { 00:03:31.938 "name": "Malloc1", 00:03:31.938 "aliases": [ 00:03:31.938 "b3f1e7f6-df3c-4919-8906-7625c14b4cb8" 00:03:31.938 ], 00:03:31.938 "product_name": "Malloc disk", 00:03:31.938 "block_size": 4096, 00:03:31.938 "num_blocks": 256, 00:03:31.938 "uuid": "b3f1e7f6-df3c-4919-8906-7625c14b4cb8", 00:03:31.938 "assigned_rate_limits": { 00:03:31.938 "rw_ios_per_sec": 0, 00:03:31.938 "rw_mbytes_per_sec": 0, 00:03:31.938 "r_mbytes_per_sec": 0, 00:03:31.938 "w_mbytes_per_sec": 0 00:03:31.938 }, 00:03:31.938 "claimed": false, 00:03:31.938 "zoned": false, 00:03:31.938 "supported_io_types": { 00:03:31.938 "read": true, 00:03:31.938 "write": true, 00:03:31.938 "unmap": true, 00:03:31.938 "flush": true, 00:03:31.938 "reset": true, 00:03:31.938 "nvme_admin": false, 00:03:31.938 "nvme_io": false, 00:03:31.938 "nvme_io_md": false, 00:03:31.938 "write_zeroes": true, 00:03:31.938 "zcopy": true, 00:03:31.938 
"get_zone_info": false, 00:03:31.938 "zone_management": false, 00:03:31.938 "zone_append": false, 00:03:31.938 "compare": false, 00:03:31.938 "compare_and_write": false, 00:03:31.938 "abort": true, 00:03:31.938 "seek_hole": false, 00:03:31.938 "seek_data": false, 00:03:31.938 "copy": true, 00:03:31.938 "nvme_iov_md": false 00:03:31.938 }, 00:03:31.938 "memory_domains": [ 00:03:31.938 { 00:03:31.938 "dma_device_id": "system", 00:03:31.938 "dma_device_type": 1 00:03:31.938 }, 00:03:31.938 { 00:03:31.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:31.938 "dma_device_type": 2 00:03:31.938 } 00:03:31.938 ], 00:03:31.938 "driver_specific": {} 00:03:31.938 } 00:03:31.938 ]' 00:03:31.938 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:32.195 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:32.195 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.195 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.195 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:32.195 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:32.195 11:07:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:32.195 00:03:32.195 real 0m0.102s 00:03:32.195 user 0m0.068s 00:03:32.195 sys 0m0.008s 00:03:32.195 11:07:58 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.195 11:07:58 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:32.195 ************************************ 00:03:32.195 END TEST rpc_plugins 00:03:32.195 ************************************ 00:03:32.195 11:07:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:32.195 11:07:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:32.195 11:07:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.195 11:07:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.195 11:07:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.195 ************************************ 00:03:32.195 START TEST rpc_trace_cmd_test 00:03:32.195 ************************************ 00:03:32.195 11:07:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:32.195 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:32.195 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:32.196 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid448967", 00:03:32.196 "tpoint_group_mask": "0x8", 00:03:32.196 "iscsi_conn": { 00:03:32.196 "mask": "0x2", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "scsi": { 00:03:32.196 "mask": "0x4", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "bdev": { 00:03:32.196 "mask": "0x8", 00:03:32.196 "tpoint_mask": "0xffffffffffffffff" 00:03:32.196 }, 00:03:32.196 "nvmf_rdma": { 00:03:32.196 "mask": "0x10", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "nvmf_tcp": { 00:03:32.196 "mask": "0x20", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 
00:03:32.196 "ftl": { 00:03:32.196 "mask": "0x40", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "blobfs": { 00:03:32.196 "mask": "0x80", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "dsa": { 00:03:32.196 "mask": "0x200", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "thread": { 00:03:32.196 "mask": "0x400", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "nvme_pcie": { 00:03:32.196 "mask": "0x800", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "iaa": { 00:03:32.196 "mask": "0x1000", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "nvme_tcp": { 00:03:32.196 "mask": "0x2000", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "bdev_nvme": { 00:03:32.196 "mask": "0x4000", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "sock": { 00:03:32.196 "mask": "0x8000", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 }, 00:03:32.196 "blob": { 00:03:32.196 "mask": "0x10000", 00:03:32.196 "tpoint_mask": "0x0" 00:03:32.196 } 00:03:32.196 }' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 17 -gt 2 ']' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:32.196 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:32.455 11:07:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:32.455 00:03:32.455 real 0m0.183s 
00:03:32.455 user 0m0.162s 00:03:32.455 sys 0m0.011s 00:03:32.455 11:07:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 ************************************ 00:03:32.455 END TEST rpc_trace_cmd_test 00:03:32.455 ************************************ 00:03:32.455 11:07:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:32.455 11:07:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:32.455 11:07:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:32.455 11:07:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:32.455 11:07:58 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.455 11:07:58 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.455 11:07:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 ************************************ 00:03:32.455 START TEST rpc_daemon_integrity 00:03:32.455 ************************************ 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:32.455 { 00:03:32.455 "name": "Malloc2", 00:03:32.455 "aliases": [ 00:03:32.455 "d967f05b-b12f-411a-9f9f-3ba3b118838a" 00:03:32.455 ], 00:03:32.455 "product_name": "Malloc disk", 00:03:32.455 "block_size": 512, 00:03:32.455 "num_blocks": 16384, 00:03:32.455 "uuid": "d967f05b-b12f-411a-9f9f-3ba3b118838a", 00:03:32.455 "assigned_rate_limits": { 00:03:32.455 "rw_ios_per_sec": 0, 00:03:32.455 "rw_mbytes_per_sec": 0, 00:03:32.455 "r_mbytes_per_sec": 0, 00:03:32.455 "w_mbytes_per_sec": 0 00:03:32.455 }, 00:03:32.455 "claimed": false, 00:03:32.455 "zoned": false, 00:03:32.455 "supported_io_types": { 00:03:32.455 "read": true, 00:03:32.455 "write": true, 00:03:32.455 "unmap": true, 00:03:32.455 "flush": true, 00:03:32.455 "reset": true, 00:03:32.455 "nvme_admin": false, 00:03:32.455 "nvme_io": false, 00:03:32.455 "nvme_io_md": false, 00:03:32.455 "write_zeroes": true, 00:03:32.455 "zcopy": true, 00:03:32.455 "get_zone_info": false, 00:03:32.455 "zone_management": false, 00:03:32.455 "zone_append": false, 00:03:32.455 "compare": false, 00:03:32.455 "compare_and_write": false, 00:03:32.455 "abort": true, 00:03:32.455 "seek_hole": false, 00:03:32.455 "seek_data": false, 00:03:32.455 "copy": true, 00:03:32.455 "nvme_iov_md": false 00:03:32.455 }, 00:03:32.455 
"memory_domains": [ 00:03:32.455 { 00:03:32.455 "dma_device_id": "system", 00:03:32.455 "dma_device_type": 1 00:03:32.455 }, 00:03:32.455 { 00:03:32.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.455 "dma_device_type": 2 00:03:32.455 } 00:03:32.455 ], 00:03:32.455 "driver_specific": {} 00:03:32.455 } 00:03:32.455 ]' 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 [2024-07-12 11:07:58.501434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:32.455 [2024-07-12 11:07:58.501478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:32.455 [2024-07-12 11:07:58.501500] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xccac00 00:03:32.455 [2024-07-12 11:07:58.501512] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:32.455 [2024-07-12 11:07:58.502649] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:32.455 [2024-07-12 11:07:58.502674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:32.455 Passthru0 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:32.455 { 00:03:32.455 "name": "Malloc2", 00:03:32.455 "aliases": [ 00:03:32.455 "d967f05b-b12f-411a-9f9f-3ba3b118838a" 00:03:32.455 ], 00:03:32.455 "product_name": "Malloc disk", 00:03:32.455 "block_size": 512, 00:03:32.455 "num_blocks": 16384, 00:03:32.455 "uuid": "d967f05b-b12f-411a-9f9f-3ba3b118838a", 00:03:32.455 "assigned_rate_limits": { 00:03:32.455 "rw_ios_per_sec": 0, 00:03:32.455 "rw_mbytes_per_sec": 0, 00:03:32.455 "r_mbytes_per_sec": 0, 00:03:32.455 "w_mbytes_per_sec": 0 00:03:32.455 }, 00:03:32.455 "claimed": true, 00:03:32.455 "claim_type": "exclusive_write", 00:03:32.455 "zoned": false, 00:03:32.455 "supported_io_types": { 00:03:32.455 "read": true, 00:03:32.455 "write": true, 00:03:32.455 "unmap": true, 00:03:32.455 "flush": true, 00:03:32.455 "reset": true, 00:03:32.455 "nvme_admin": false, 00:03:32.455 "nvme_io": false, 00:03:32.455 "nvme_io_md": false, 00:03:32.455 "write_zeroes": true, 00:03:32.455 "zcopy": true, 00:03:32.455 "get_zone_info": false, 00:03:32.455 "zone_management": false, 00:03:32.455 "zone_append": false, 00:03:32.455 "compare": false, 00:03:32.455 "compare_and_write": false, 00:03:32.455 "abort": true, 00:03:32.455 "seek_hole": false, 00:03:32.455 "seek_data": false, 00:03:32.455 "copy": true, 00:03:32.455 "nvme_iov_md": false 00:03:32.455 }, 00:03:32.455 "memory_domains": [ 00:03:32.455 { 00:03:32.455 "dma_device_id": "system", 00:03:32.455 "dma_device_type": 1 00:03:32.455 }, 00:03:32.455 { 00:03:32.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.455 "dma_device_type": 2 00:03:32.455 } 00:03:32.455 ], 00:03:32.455 "driver_specific": {} 00:03:32.455 }, 00:03:32.455 { 00:03:32.455 "name": "Passthru0", 00:03:32.455 "aliases": [ 00:03:32.455 "ec5b2d5a-a0c9-5251-975b-9202d68ac401" 00:03:32.455 ], 00:03:32.455 "product_name": "passthru", 00:03:32.455 "block_size": 512, 00:03:32.455 "num_blocks": 
16384, 00:03:32.455 "uuid": "ec5b2d5a-a0c9-5251-975b-9202d68ac401", 00:03:32.455 "assigned_rate_limits": { 00:03:32.455 "rw_ios_per_sec": 0, 00:03:32.455 "rw_mbytes_per_sec": 0, 00:03:32.455 "r_mbytes_per_sec": 0, 00:03:32.455 "w_mbytes_per_sec": 0 00:03:32.455 }, 00:03:32.455 "claimed": false, 00:03:32.455 "zoned": false, 00:03:32.455 "supported_io_types": { 00:03:32.455 "read": true, 00:03:32.455 "write": true, 00:03:32.455 "unmap": true, 00:03:32.455 "flush": true, 00:03:32.455 "reset": true, 00:03:32.455 "nvme_admin": false, 00:03:32.455 "nvme_io": false, 00:03:32.455 "nvme_io_md": false, 00:03:32.455 "write_zeroes": true, 00:03:32.455 "zcopy": true, 00:03:32.455 "get_zone_info": false, 00:03:32.455 "zone_management": false, 00:03:32.455 "zone_append": false, 00:03:32.455 "compare": false, 00:03:32.455 "compare_and_write": false, 00:03:32.455 "abort": true, 00:03:32.455 "seek_hole": false, 00:03:32.455 "seek_data": false, 00:03:32.455 "copy": true, 00:03:32.455 "nvme_iov_md": false 00:03:32.455 }, 00:03:32.455 "memory_domains": [ 00:03:32.455 { 00:03:32.455 "dma_device_id": "system", 00:03:32.455 "dma_device_type": 1 00:03:32.455 }, 00:03:32.455 { 00:03:32.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:32.455 "dma_device_type": 2 00:03:32.455 } 00:03:32.455 ], 00:03:32.455 "driver_specific": { 00:03:32.455 "passthru": { 00:03:32.455 "name": "Passthru0", 00:03:32.455 "base_bdev_name": "Malloc2" 00:03:32.455 } 00:03:32.455 } 00:03:32.455 } 00:03:32.455 ]' 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.455 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.456 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:32.456 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:32.456 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.456 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:32.456 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:32.456 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:32.713 11:07:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:32.713 00:03:32.713 real 0m0.225s 00:03:32.713 user 0m0.149s 00:03:32.713 sys 0m0.019s 00:03:32.713 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.713 11:07:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:32.713 ************************************ 00:03:32.713 END TEST rpc_daemon_integrity 00:03:32.713 ************************************ 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:32.713 11:07:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:32.713 11:07:58 rpc -- rpc/rpc.sh@84 -- # killprocess 448967 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@948 -- # '[' -z 448967 ']' 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@952 -- # kill -0 448967 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@953 -- # uname 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@953 
-- # '[' Linux = Linux ']' 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 448967 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 448967' 00:03:32.713 killing process with pid 448967 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@967 -- # kill 448967 00:03:32.713 11:07:58 rpc -- common/autotest_common.sh@972 -- # wait 448967 00:03:32.971 00:03:32.971 real 0m1.891s 00:03:32.971 user 0m2.342s 00:03:32.971 sys 0m0.582s 00:03:32.971 11:07:59 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.971 11:07:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.971 ************************************ 00:03:32.971 END TEST rpc 00:03:32.971 ************************************ 00:03:33.230 11:07:59 -- common/autotest_common.sh@1142 -- # return 0 00:03:33.230 11:07:59 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:33.230 11:07:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.230 11:07:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.230 11:07:59 -- common/autotest_common.sh@10 -- # set +x 00:03:33.230 ************************************ 00:03:33.230 START TEST skip_rpc 00:03:33.230 ************************************ 00:03:33.230 11:07:59 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:33.230 * Looking for test storage... 
00:03:33.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.230 11:07:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:33.230 11:07:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:33.230 11:07:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:33.230 11:07:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.230 11:07:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.230 11:07:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.230 ************************************ 00:03:33.230 START TEST skip_rpc 00:03:33.230 ************************************ 00:03:33.230 11:07:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:33.230 11:07:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=449405 00:03:33.230 11:07:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:33.230 11:07:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.230 11:07:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:33.230 [2024-07-12 11:07:59.275113] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:03:33.230 [2024-07-12 11:07:59.275190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449405 ] 00:03:33.230 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.230 [2024-07-12 11:07:59.328925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.488 [2024-07-12 11:07:59.434037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 449405 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 449405 ']' 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 449405 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 449405 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:38.745 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:38.746 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 449405' 00:03:38.746 killing process with pid 449405 00:03:38.746 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 449405 00:03:38.746 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 449405 00:03:38.746 00:03:38.746 real 0m5.457s 00:03:38.746 user 0m5.163s 00:03:38.746 sys 0m0.294s 00:03:38.746 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.746 11:08:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.746 ************************************ 00:03:38.746 END TEST skip_rpc 00:03:38.746 ************************************ 00:03:38.746 11:08:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:38.746 11:08:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:38.746 11:08:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.746 11:08:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.746 
11:08:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:38.746 ************************************ 00:03:38.746 START TEST skip_rpc_with_json 00:03:38.746 ************************************ 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=450095 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 450095 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 450095 ']' 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:38.746 11:08:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:38.746 [2024-07-12 11:08:04.783978] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:03:38.746 [2024-07-12 11:08:04.784064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450095 ] 00:03:38.746 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.746 [2024-07-12 11:08:04.839793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.003 [2024-07-12 11:08:04.950745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.260 [2024-07-12 11:08:05.197378] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:39.260 request: 00:03:39.260 { 00:03:39.260 "trtype": "tcp", 00:03:39.260 "method": "nvmf_get_transports", 00:03:39.260 "req_id": 1 00:03:39.260 } 00:03:39.260 Got JSON-RPC error response 00:03:39.260 response: 00:03:39.260 { 00:03:39.260 "code": -19, 00:03:39.260 "message": "No such device" 00:03:39.260 } 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.260 [2024-07-12 11:08:05.205487] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.260 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.260 { 00:03:39.260 "subsystems": [ 00:03:39.260 { 00:03:39.260 "subsystem": "vfio_user_target", 00:03:39.260 "config": null 00:03:39.260 }, 00:03:39.260 { 00:03:39.260 "subsystem": "keyring", 00:03:39.260 "config": [] 00:03:39.260 }, 00:03:39.260 { 00:03:39.260 "subsystem": "iobuf", 00:03:39.260 "config": [ 00:03:39.260 { 00:03:39.260 "method": "iobuf_set_options", 00:03:39.260 "params": { 00:03:39.260 "small_pool_count": 8192, 00:03:39.260 "large_pool_count": 1024, 00:03:39.260 "small_bufsize": 8192, 00:03:39.260 "large_bufsize": 135168 00:03:39.260 } 00:03:39.260 } 00:03:39.260 ] 00:03:39.260 }, 00:03:39.260 { 00:03:39.260 "subsystem": "sock", 00:03:39.260 "config": [ 00:03:39.260 { 00:03:39.260 "method": "sock_set_default_impl", 00:03:39.260 "params": { 00:03:39.260 "impl_name": "posix" 00:03:39.260 } 00:03:39.260 }, 00:03:39.260 { 00:03:39.260 "method": "sock_impl_set_options", 00:03:39.260 "params": { 00:03:39.260 "impl_name": "ssl", 00:03:39.260 "recv_buf_size": 4096, 00:03:39.260 "send_buf_size": 4096, 00:03:39.260 "enable_recv_pipe": true, 00:03:39.260 "enable_quickack": false, 00:03:39.260 "enable_placement_id": 0, 00:03:39.260 "enable_zerocopy_send_server": true, 00:03:39.260 "enable_zerocopy_send_client": false, 00:03:39.260 "zerocopy_threshold": 0, 
00:03:39.260 "tls_version": 0, 00:03:39.260 "enable_ktls": false 00:03:39.260 } 00:03:39.260 }, 00:03:39.260 { 00:03:39.260 "method": "sock_impl_set_options", 00:03:39.261 "params": { 00:03:39.261 "impl_name": "posix", 00:03:39.261 "recv_buf_size": 2097152, 00:03:39.261 "send_buf_size": 2097152, 00:03:39.261 "enable_recv_pipe": true, 00:03:39.261 "enable_quickack": false, 00:03:39.261 "enable_placement_id": 0, 00:03:39.261 "enable_zerocopy_send_server": true, 00:03:39.261 "enable_zerocopy_send_client": false, 00:03:39.261 "zerocopy_threshold": 0, 00:03:39.261 "tls_version": 0, 00:03:39.261 "enable_ktls": false 00:03:39.261 } 00:03:39.261 } 00:03:39.261 ] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "vmd", 00:03:39.261 "config": [] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "accel", 00:03:39.261 "config": [ 00:03:39.261 { 00:03:39.261 "method": "accel_set_options", 00:03:39.261 "params": { 00:03:39.261 "small_cache_size": 128, 00:03:39.261 "large_cache_size": 16, 00:03:39.261 "task_count": 2048, 00:03:39.261 "sequence_count": 2048, 00:03:39.261 "buf_count": 2048 00:03:39.261 } 00:03:39.261 } 00:03:39.261 ] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "bdev", 00:03:39.261 "config": [ 00:03:39.261 { 00:03:39.261 "method": "bdev_set_options", 00:03:39.261 "params": { 00:03:39.261 "bdev_io_pool_size": 65535, 00:03:39.261 "bdev_io_cache_size": 256, 00:03:39.261 "bdev_auto_examine": true, 00:03:39.261 "iobuf_small_cache_size": 128, 00:03:39.261 "iobuf_large_cache_size": 16 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "bdev_raid_set_options", 00:03:39.261 "params": { 00:03:39.261 "process_window_size_kb": 1024 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "bdev_iscsi_set_options", 00:03:39.261 "params": { 00:03:39.261 "timeout_sec": 30 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "bdev_nvme_set_options", 00:03:39.261 "params": { 00:03:39.261 "action_on_timeout": 
"none", 00:03:39.261 "timeout_us": 0, 00:03:39.261 "timeout_admin_us": 0, 00:03:39.261 "keep_alive_timeout_ms": 10000, 00:03:39.261 "arbitration_burst": 0, 00:03:39.261 "low_priority_weight": 0, 00:03:39.261 "medium_priority_weight": 0, 00:03:39.261 "high_priority_weight": 0, 00:03:39.261 "nvme_adminq_poll_period_us": 10000, 00:03:39.261 "nvme_ioq_poll_period_us": 0, 00:03:39.261 "io_queue_requests": 0, 00:03:39.261 "delay_cmd_submit": true, 00:03:39.261 "transport_retry_count": 4, 00:03:39.261 "bdev_retry_count": 3, 00:03:39.261 "transport_ack_timeout": 0, 00:03:39.261 "ctrlr_loss_timeout_sec": 0, 00:03:39.261 "reconnect_delay_sec": 0, 00:03:39.261 "fast_io_fail_timeout_sec": 0, 00:03:39.261 "disable_auto_failback": false, 00:03:39.261 "generate_uuids": false, 00:03:39.261 "transport_tos": 0, 00:03:39.261 "nvme_error_stat": false, 00:03:39.261 "rdma_srq_size": 0, 00:03:39.261 "io_path_stat": false, 00:03:39.261 "allow_accel_sequence": false, 00:03:39.261 "rdma_max_cq_size": 0, 00:03:39.261 "rdma_cm_event_timeout_ms": 0, 00:03:39.261 "dhchap_digests": [ 00:03:39.261 "sha256", 00:03:39.261 "sha384", 00:03:39.261 "sha512" 00:03:39.261 ], 00:03:39.261 "dhchap_dhgroups": [ 00:03:39.261 "null", 00:03:39.261 "ffdhe2048", 00:03:39.261 "ffdhe3072", 00:03:39.261 "ffdhe4096", 00:03:39.261 "ffdhe6144", 00:03:39.261 "ffdhe8192" 00:03:39.261 ] 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "bdev_nvme_set_hotplug", 00:03:39.261 "params": { 00:03:39.261 "period_us": 100000, 00:03:39.261 "enable": false 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "bdev_wait_for_examine" 00:03:39.261 } 00:03:39.261 ] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "scsi", 00:03:39.261 "config": null 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "scheduler", 00:03:39.261 "config": [ 00:03:39.261 { 00:03:39.261 "method": "framework_set_scheduler", 00:03:39.261 "params": { 00:03:39.261 "name": "static" 00:03:39.261 } 00:03:39.261 } 
00:03:39.261 ] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "vhost_scsi", 00:03:39.261 "config": [] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "vhost_blk", 00:03:39.261 "config": [] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "ublk", 00:03:39.261 "config": [] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "nbd", 00:03:39.261 "config": [] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "nvmf", 00:03:39.261 "config": [ 00:03:39.261 { 00:03:39.261 "method": "nvmf_set_config", 00:03:39.261 "params": { 00:03:39.261 "discovery_filter": "match_any", 00:03:39.261 "admin_cmd_passthru": { 00:03:39.261 "identify_ctrlr": false 00:03:39.261 } 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "nvmf_set_max_subsystems", 00:03:39.261 "params": { 00:03:39.261 "max_subsystems": 1024 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "nvmf_set_crdt", 00:03:39.261 "params": { 00:03:39.261 "crdt1": 0, 00:03:39.261 "crdt2": 0, 00:03:39.261 "crdt3": 0 00:03:39.261 } 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "method": "nvmf_create_transport", 00:03:39.261 "params": { 00:03:39.261 "trtype": "TCP", 00:03:39.261 "max_queue_depth": 128, 00:03:39.261 "max_io_qpairs_per_ctrlr": 127, 00:03:39.261 "in_capsule_data_size": 4096, 00:03:39.261 "max_io_size": 131072, 00:03:39.261 "io_unit_size": 131072, 00:03:39.261 "max_aq_depth": 128, 00:03:39.261 "num_shared_buffers": 511, 00:03:39.261 "buf_cache_size": 4294967295, 00:03:39.261 "dif_insert_or_strip": false, 00:03:39.261 "zcopy": false, 00:03:39.261 "c2h_success": true, 00:03:39.261 "sock_priority": 0, 00:03:39.261 "abort_timeout_sec": 1, 00:03:39.261 "ack_timeout": 0, 00:03:39.261 "data_wr_pool_size": 0 00:03:39.261 } 00:03:39.261 } 00:03:39.261 ] 00:03:39.261 }, 00:03:39.261 { 00:03:39.261 "subsystem": "iscsi", 00:03:39.261 "config": [ 00:03:39.261 { 00:03:39.261 "method": "iscsi_set_options", 00:03:39.261 "params": { 00:03:39.261 "node_base": 
"iqn.2016-06.io.spdk", 00:03:39.261 "max_sessions": 128, 00:03:39.261 "max_connections_per_session": 2, 00:03:39.261 "max_queue_depth": 64, 00:03:39.261 "default_time2wait": 2, 00:03:39.261 "default_time2retain": 20, 00:03:39.261 "first_burst_length": 8192, 00:03:39.261 "immediate_data": true, 00:03:39.261 "allow_duplicated_isid": false, 00:03:39.261 "error_recovery_level": 0, 00:03:39.261 "nop_timeout": 60, 00:03:39.261 "nop_in_interval": 30, 00:03:39.261 "disable_chap": false, 00:03:39.261 "require_chap": false, 00:03:39.261 "mutual_chap": false, 00:03:39.261 "chap_group": 0, 00:03:39.261 "max_large_datain_per_connection": 64, 00:03:39.261 "max_r2t_per_connection": 4, 00:03:39.261 "pdu_pool_size": 36864, 00:03:39.261 "immediate_data_pool_size": 16384, 00:03:39.261 "data_out_pool_size": 2048 00:03:39.261 } 00:03:39.261 } 00:03:39.261 ] 00:03:39.261 } 00:03:39.261 ] 00:03:39.261 } 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 450095 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 450095 ']' 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 450095 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450095 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450095' 00:03:39.261 killing 
process with pid 450095 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 450095 00:03:39.261 11:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 450095 00:03:39.826 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=450191 00:03:39.826 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.826 11:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 450191 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 450191 ']' 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 450191 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450191 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450191' 00:03:45.087 killing process with pid 450191 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 450191 00:03:45.087 11:08:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 450191 00:03:45.343 11:08:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:45.343 11:08:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:45.343 00:03:45.344 real 0m6.552s 00:03:45.344 user 0m6.167s 00:03:45.344 sys 0m0.666s 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:45.344 ************************************ 00:03:45.344 END TEST skip_rpc_with_json 00:03:45.344 ************************************ 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:45.344 11:08:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.344 ************************************ 00:03:45.344 START TEST skip_rpc_with_delay 00:03:45.344 ************************************ 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:45.344 [2024-07-12 11:08:11.386004] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:45.344 [2024-07-12 11:08:11.386107] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:45.344 00:03:45.344 real 0m0.071s 00:03:45.344 user 0m0.047s 00:03:45.344 sys 0m0.023s 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.344 11:08:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:45.344 ************************************ 00:03:45.344 END TEST skip_rpc_with_delay 00:03:45.344 ************************************ 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:45.344 11:08:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:45.344 11:08:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:45.344 11:08:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.344 11:08:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.344 ************************************ 00:03:45.344 START TEST exit_on_failed_rpc_init 00:03:45.344 ************************************ 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=450847 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 450847 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 450847 ']' 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:45.344 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.601 [2024-07-12 11:08:11.501496] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:03:45.601 [2024-07-12 11:08:11.501588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450847 ] 00:03:45.601 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.601 [2024-07-12 11:08:11.560254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.601 [2024-07-12 11:08:11.670665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.858 11:08:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:45.858 11:08:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:45.858 [2024-07-12 11:08:11.970071] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:03:45.858 [2024-07-12 11:08:11.970166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450965 ] 00:03:46.115 EAL: No free 2048 kB hugepages reported on node 1 00:03:46.115 [2024-07-12 11:08:12.028098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.115 [2024-07-12 11:08:12.135775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:46.115 [2024-07-12 11:08:12.135895] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:46.115 [2024-07-12 11:08:12.135943] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:46.115 [2024-07-12 11:08:12.135956] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 450847 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 450847 ']' 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 450847 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 450847 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 450847' 
00:03:46.373 killing process with pid 450847 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 450847 00:03:46.373 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 450847 00:03:46.632 00:03:46.632 real 0m1.267s 00:03:46.632 user 0m1.450s 00:03:46.632 sys 0m0.409s 00:03:46.632 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.632 11:08:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.632 ************************************ 00:03:46.632 END TEST exit_on_failed_rpc_init 00:03:46.632 ************************************ 00:03:46.632 11:08:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:46.632 11:08:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:46.632 00:03:46.632 real 0m13.599s 00:03:46.632 user 0m12.934s 00:03:46.632 sys 0m1.554s 00:03:46.632 11:08:12 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.632 11:08:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.632 ************************************ 00:03:46.632 END TEST skip_rpc 00:03:46.632 ************************************ 00:03:46.892 11:08:12 -- common/autotest_common.sh@1142 -- # return 0 00:03:46.892 11:08:12 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:46.892 11:08:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.892 11:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.892 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:03:46.892 ************************************ 00:03:46.892 START TEST rpc_client 00:03:46.892 ************************************ 00:03:46.892 11:08:12 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:46.892 * Looking for test storage... 00:03:46.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:46.892 11:08:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:46.892 OK 00:03:46.892 11:08:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:46.892 00:03:46.892 real 0m0.072s 00:03:46.892 user 0m0.027s 00:03:46.892 sys 0m0.051s 00:03:46.892 11:08:12 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.892 11:08:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:46.892 ************************************ 00:03:46.892 END TEST rpc_client 00:03:46.892 ************************************ 00:03:46.892 11:08:12 -- common/autotest_common.sh@1142 -- # return 0 00:03:46.892 11:08:12 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:46.892 11:08:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.892 11:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.892 11:08:12 -- common/autotest_common.sh@10 -- # set +x 00:03:46.892 ************************************ 00:03:46.892 START TEST json_config 00:03:46.892 ************************************ 00:03:46.892 11:08:12 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.892 
11:08:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:46.892 11:08:12 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.892 11:08:12 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.892 11:08:12 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.892 11:08:12 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.892 11:08:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.892 11:08:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.892 11:08:12 json_config -- paths/export.sh@5 -- # export PATH 00:03:46.892 11:08:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@47 -- # : 0 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:46.892 
11:08:12 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:46.892 11:08:12 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:46.892 INFO: JSON configuration test init 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:46.892 11:08:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:46.892 11:08:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:46.892 11:08:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:46.892 11:08:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.892 11:08:12 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:46.892 11:08:12 json_config -- json_config/common.sh@9 -- # local app=target 00:03:46.892 11:08:12 json_config -- json_config/common.sh@10 -- # shift 00:03:46.892 11:08:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:46.892 11:08:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:46.892 11:08:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:46.893 11:08:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:46.893 11:08:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:03:46.893 11:08:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=451207 00:03:46.893 11:08:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:46.893 11:08:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:46.893 Waiting for target to run... 00:03:46.893 11:08:12 json_config -- json_config/common.sh@25 -- # waitforlisten 451207 /var/tmp/spdk_tgt.sock 00:03:46.893 11:08:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 451207 ']' 00:03:46.893 11:08:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:46.893 11:08:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:46.893 11:08:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:46.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:46.893 11:08:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:46.893 11:08:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.893 [2024-07-12 11:08:13.020633] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:03:46.893 [2024-07-12 11:08:13.020729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451207 ] 00:03:47.150 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.407 [2024-07-12 11:08:13.350231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.407 [2024-07-12 11:08:13.429312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.973 11:08:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:47.973 11:08:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:47.973 11:08:13 json_config -- json_config/common.sh@26 -- # echo '' 00:03:47.973 00:03:47.973 11:08:13 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:47.973 11:08:13 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:47.973 11:08:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.973 11:08:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.973 11:08:13 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:47.973 11:08:13 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:47.973 11:08:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.973 11:08:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.973 11:08:13 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:47.973 11:08:13 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:47.973 11:08:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:51.285 
11:08:17 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:51.285 11:08:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.285 11:08:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:51.285 11:08:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:51.285 11:08:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:51.285 11:08:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:51.285 11:08:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.285 11:08:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:51.285 11:08:17 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.285 11:08:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.546 MallocForNvmf0 00:03:51.546 11:08:17 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.546 11:08:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.804 MallocForNvmf1 00:03:51.804 11:08:17 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.804 11:08:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:52.062 [2024-07-12 11:08:18.098264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.062 11:08:18 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.062 11:08:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.320 11:08:18 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.320 11:08:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.577 11:08:18 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.577 11:08:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.835 11:08:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.835 11:08:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:53.092 [2024-07-12 11:08:19.093415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:53.092 11:08:19 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:53.092 11:08:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.093 11:08:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.093 11:08:19 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:03:53.093 11:08:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.093 11:08:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.093 11:08:19 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:53.093 11:08:19 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.093 11:08:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.351 MallocBdevForConfigChangeCheck 00:03:53.351 11:08:19 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:53.351 11:08:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.351 11:08:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.351 11:08:19 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:53.351 11:08:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.917 11:08:19 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:53.917 INFO: shutting down applications... 
00:03:53.917 11:08:19 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:53.917 11:08:19 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:53.917 11:08:19 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:53.917 11:08:19 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:55.290 Calling clear_iscsi_subsystem 00:03:55.290 Calling clear_nvmf_subsystem 00:03:55.290 Calling clear_nbd_subsystem 00:03:55.290 Calling clear_ublk_subsystem 00:03:55.290 Calling clear_vhost_blk_subsystem 00:03:55.290 Calling clear_vhost_scsi_subsystem 00:03:55.290 Calling clear_bdev_subsystem 00:03:55.548 11:08:21 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:55.548 11:08:21 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:55.548 11:08:21 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:55.548 11:08:21 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.548 11:08:21 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:55.548 11:08:21 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:55.806 11:08:21 json_config -- json_config/json_config.sh@345 -- # break 00:03:55.806 11:08:21 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:55.806 11:08:21 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:55.806 11:08:21 json_config -- 
json_config/common.sh@31 -- # local app=target 00:03:55.806 11:08:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:55.806 11:08:21 json_config -- json_config/common.sh@35 -- # [[ -n 451207 ]] 00:03:55.806 11:08:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 451207 00:03:55.806 11:08:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:55.806 11:08:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.806 11:08:21 json_config -- json_config/common.sh@41 -- # kill -0 451207 00:03:55.806 11:08:21 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:56.374 11:08:22 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:56.374 11:08:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:56.374 11:08:22 json_config -- json_config/common.sh@41 -- # kill -0 451207 00:03:56.374 11:08:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:56.374 11:08:22 json_config -- json_config/common.sh@43 -- # break 00:03:56.374 11:08:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:56.374 11:08:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:56.374 SPDK target shutdown done 00:03:56.374 11:08:22 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:56.374 INFO: relaunching applications... 
00:03:56.374 11:08:22 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.374 11:08:22 json_config -- json_config/common.sh@9 -- # local app=target 00:03:56.374 11:08:22 json_config -- json_config/common.sh@10 -- # shift 00:03:56.374 11:08:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.374 11:08:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.374 11:08:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.374 11:08:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.374 11:08:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.374 11:08:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=452404 00:03:56.374 11:08:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.374 11:08:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.374 Waiting for target to run... 00:03:56.374 11:08:22 json_config -- json_config/common.sh@25 -- # waitforlisten 452404 /var/tmp/spdk_tgt.sock 00:03:56.374 11:08:22 json_config -- common/autotest_common.sh@829 -- # '[' -z 452404 ']' 00:03:56.374 11:08:22 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.374 11:08:22 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:56.374 11:08:22 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:56.374 11:08:22 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:56.374 11:08:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.374 [2024-07-12 11:08:22.368186] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:03:56.375 [2024-07-12 11:08:22.368282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452404 ] 00:03:56.375 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.632 [2024-07-12 11:08:22.718666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.892 [2024-07-12 11:08:22.799041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.174 [2024-07-12 11:08:25.827631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.174 [2024-07-12 11:08:25.860064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.174 11:08:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.174 11:08:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:00.174 11:08:25 json_config -- json_config/common.sh@26 -- # echo '' 00:04:00.174 00:04:00.174 11:08:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:00.174 11:08:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:00.174 INFO: Checking if target configuration is the same... 
00:04:00.174 11:08:25 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.174 11:08:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:00.174 11:08:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.174 + '[' 2 -ne 2 ']' 00:04:00.174 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:00.174 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:00.174 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.174 +++ basename /dev/fd/62 00:04:00.174 ++ mktemp /tmp/62.XXX 00:04:00.174 + tmp_file_1=/tmp/62.OIe 00:04:00.174 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.174 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:00.174 + tmp_file_2=/tmp/spdk_tgt_config.json.lnq 00:04:00.174 + ret=0 00:04:00.174 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.174 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.432 + diff -u /tmp/62.OIe /tmp/spdk_tgt_config.json.lnq 00:04:00.432 + echo 'INFO: JSON config files are the same' 00:04:00.432 INFO: JSON config files are the same 00:04:00.432 + rm /tmp/62.OIe /tmp/spdk_tgt_config.json.lnq 00:04:00.432 + exit 0 00:04:00.432 11:08:26 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:00.432 11:08:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:00.432 INFO: changing configuration and checking if this can be detected... 
00:04:00.432 11:08:26 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:00.432 11:08:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:00.690 11:08:26 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.690 11:08:26 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:00.690 11:08:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.690 + '[' 2 -ne 2 ']' 00:04:00.690 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:00.690 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:00.690 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.690 +++ basename /dev/fd/62 00:04:00.690 ++ mktemp /tmp/62.XXX 00:04:00.690 + tmp_file_1=/tmp/62.5sP 00:04:00.690 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:00.690 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:00.690 + tmp_file_2=/tmp/spdk_tgt_config.json.vPy 00:04:00.690 + ret=0 00:04:00.690 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:00.948 + diff -u /tmp/62.5sP /tmp/spdk_tgt_config.json.vPy 00:04:00.948 + ret=1 00:04:00.948 + echo '=== Start of file: /tmp/62.5sP ===' 00:04:00.948 + cat /tmp/62.5sP 00:04:00.948 + echo '=== End of file: /tmp/62.5sP ===' 00:04:00.948 + echo '' 00:04:00.948 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vPy ===' 00:04:00.948 + cat /tmp/spdk_tgt_config.json.vPy 00:04:00.948 + echo '=== End of file: /tmp/spdk_tgt_config.json.vPy ===' 00:04:00.948 + echo '' 00:04:00.948 + rm /tmp/62.5sP /tmp/spdk_tgt_config.json.vPy 00:04:00.948 + exit 1 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:00.948 INFO: configuration change detected. 
00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:00.948 11:08:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.948 11:08:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 452404 ]] 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:00.948 11:08:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:00.948 11:08:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.948 11:08:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.948 11:08:27 json_config -- json_config/json_config.sh@323 -- # killprocess 452404 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@948 -- # '[' -z 452404 ']' 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@952 -- # kill -0 452404 
00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@953 -- # uname 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 452404 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 452404' 00:04:00.948 killing process with pid 452404 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@967 -- # kill 452404 00:04:00.948 11:08:27 json_config -- common/autotest_common.sh@972 -- # wait 452404 00:04:02.846 11:08:28 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.846 11:08:28 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:02.846 11:08:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:02.846 11:08:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.846 11:08:28 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:02.846 11:08:28 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:02.846 INFO: Success 00:04:02.846 00:04:02.846 real 0m15.762s 00:04:02.846 user 0m17.682s 00:04:02.846 sys 0m1.801s 00:04:02.846 11:08:28 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.846 11:08:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.846 ************************************ 00:04:02.846 END TEST json_config 00:04:02.846 ************************************ 00:04:02.846 11:08:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.846 11:08:28 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:02.846 11:08:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.846 11:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.846 11:08:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.846 ************************************ 00:04:02.846 START TEST json_config_extra_key 00:04:02.846 ************************************ 00:04:02.846 11:08:28 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:02.846 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.846 11:08:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.847 11:08:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.847 11:08:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.847 11:08:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.847 11:08:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.847 11:08:28 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.847 11:08:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.847 11:08:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:02.847 11:08:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:02.847 11:08:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:02.847 INFO: launching applications... 
00:04:02.847 11:08:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=453318 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.847 Waiting for target to run... 
00:04:02.847 11:08:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 453318 /var/tmp/spdk_tgt.sock 00:04:02.847 11:08:28 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 453318 ']' 00:04:02.847 11:08:28 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.847 11:08:28 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.847 11:08:28 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.847 11:08:28 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.847 11:08:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.847 [2024-07-12 11:08:28.819458] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:02.847 [2024-07-12 11:08:28.819554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453318 ]
00:04:02.847 EAL: No free 2048 kB hugepages reported on node 1
00:04:03.106 [2024-07-12 11:08:29.155189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:03.106 [2024-07-12 11:08:29.231321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:03.669 11:08:29 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:03.669 11:08:29 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:04:03.669
00:04:03.669 11:08:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:04:03.669 INFO: shutting down applications...
00:04:03.669 11:08:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 453318 ]]
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 453318
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 453318
00:04:03.669 11:08:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 453318
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@43 -- # break
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:04.234 11:08:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:04.234 SPDK target shutdown done
00:04:04.234 11:08:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:04:04.234 Success
00:04:04.234
00:04:04.234 real 0m1.532s
00:04:04.234 user 0m1.530s
00:04:04.234 sys 0m0.409s
00:04:04.234 11:08:30 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:04.234 11:08:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:04.234 ************************************
00:04:04.234 END TEST json_config_extra_key
00:04:04.234 ************************************
00:04:04.234 11:08:30 -- common/autotest_common.sh@1142 -- # return 0
00:04:04.234 11:08:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:04.234 11:08:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:04.234 11:08:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:04.234 11:08:30 -- common/autotest_common.sh@10 -- # set +x
00:04:04.234 ************************************
00:04:04.234 START TEST alias_rpc
00:04:04.234 ************************************
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:04:04.234 * Looking for test storage...
00:04:04.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:04:04.234 11:08:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:04:04.234 11:08:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=453502
00:04:04.234 11:08:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:04.234 11:08:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 453502
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 453502 ']'
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:04.234 11:08:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:04.493 [2024-07-12 11:08:30.406964] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:04:04.493 [2024-07-12 11:08:30.407057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453502 ]
00:04:04.493 EAL: No free 2048 kB hugepages reported on node 1
00:04:04.493 [2024-07-12 11:08:30.464339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:04.493 [2024-07-12 11:08:30.572603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:04.751 11:08:30 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:04.751 11:08:30 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:04:04.751 11:08:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:04:05.008 11:08:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 453502
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 453502 ']'
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 453502
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 453502
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 453502'
00:04:05.008 killing process with pid 453502
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@967 -- # kill 453502
00:04:05.008 11:08:31 alias_rpc -- common/autotest_common.sh@972 -- # wait 453502
00:04:05.574
00:04:05.574 real 0m1.256s
00:04:05.574 user 0m1.344s
00:04:05.574 sys 0m0.406s
00:04:05.574 11:08:31 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:05.574 11:08:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:05.574 ************************************
00:04:05.574 END TEST alias_rpc
00:04:05.574 ************************************
00:04:05.574 11:08:31 -- common/autotest_common.sh@1142 -- # return 0
00:04:05.574 11:08:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:04:05.574 11:08:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:05.574 11:08:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:05.574 11:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:05.574 11:08:31 -- common/autotest_common.sh@10 -- # set +x
00:04:05.574 ************************************
00:04:05.574 START TEST spdkcli_tcp
00:04:05.574 ************************************
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:04:05.574 * Looking for test storage...
00:04:05.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=453691
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:04:05.574 11:08:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 453691
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 453691 ']'
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:05.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:05.574 11:08:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:05.833 [2024-07-12 11:08:31.718835] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:04:05.833 [2024-07-12 11:08:31.718959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453691 ]
00:04:05.833 EAL: No free 2048 kB hugepages reported on node 1
00:04:05.833 [2024-07-12 11:08:31.778416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:05.833 [2024-07-12 11:08:31.888331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:05.833 [2024-07-12 11:08:31.888335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:06.092 11:08:32 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:06.092 11:08:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:04:06.092 11:08:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=453822
00:04:06.092 11:08:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:04:06.092 11:08:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:04:06.351 [
00:04:06.351 "bdev_malloc_delete",
00:04:06.351 "bdev_malloc_create",
00:04:06.351 "bdev_null_resize",
00:04:06.351 "bdev_null_delete",
00:04:06.351 "bdev_null_create",
00:04:06.351 "bdev_nvme_cuse_unregister",
00:04:06.351 "bdev_nvme_cuse_register",
00:04:06.351 "bdev_opal_new_user",
00:04:06.351 "bdev_opal_set_lock_state",
00:04:06.351 "bdev_opal_delete",
00:04:06.351 "bdev_opal_get_info",
00:04:06.351 "bdev_opal_create",
00:04:06.351 "bdev_nvme_opal_revert",
00:04:06.351 "bdev_nvme_opal_init",
00:04:06.351 "bdev_nvme_send_cmd",
00:04:06.351 "bdev_nvme_get_path_iostat",
00:04:06.351 "bdev_nvme_get_mdns_discovery_info",
00:04:06.351 "bdev_nvme_stop_mdns_discovery",
00:04:06.351 "bdev_nvme_start_mdns_discovery",
00:04:06.351 "bdev_nvme_set_multipath_policy",
00:04:06.351 "bdev_nvme_set_preferred_path",
00:04:06.351 "bdev_nvme_get_io_paths",
00:04:06.351 "bdev_nvme_remove_error_injection",
00:04:06.351 "bdev_nvme_add_error_injection",
00:04:06.351 "bdev_nvme_get_discovery_info",
00:04:06.351 "bdev_nvme_stop_discovery",
00:04:06.351 "bdev_nvme_start_discovery",
00:04:06.351 "bdev_nvme_get_controller_health_info",
00:04:06.351 "bdev_nvme_disable_controller",
00:04:06.351 "bdev_nvme_enable_controller",
00:04:06.351 "bdev_nvme_reset_controller",
00:04:06.351 "bdev_nvme_get_transport_statistics",
00:04:06.351 "bdev_nvme_apply_firmware",
00:04:06.351 "bdev_nvme_detach_controller",
00:04:06.351 "bdev_nvme_get_controllers",
00:04:06.351 "bdev_nvme_attach_controller",
00:04:06.351 "bdev_nvme_set_hotplug",
00:04:06.351 "bdev_nvme_set_options",
00:04:06.351 "bdev_passthru_delete",
00:04:06.351 "bdev_passthru_create",
00:04:06.351 "bdev_lvol_set_parent_bdev",
00:04:06.351 "bdev_lvol_set_parent",
00:04:06.351 "bdev_lvol_check_shallow_copy",
00:04:06.351 "bdev_lvol_start_shallow_copy",
00:04:06.351 "bdev_lvol_grow_lvstore",
00:04:06.351 "bdev_lvol_get_lvols",
00:04:06.351 "bdev_lvol_get_lvstores",
00:04:06.351 "bdev_lvol_delete",
00:04:06.351 "bdev_lvol_set_read_only",
00:04:06.351 "bdev_lvol_resize",
00:04:06.351 "bdev_lvol_decouple_parent",
00:04:06.351 "bdev_lvol_inflate",
00:04:06.351 "bdev_lvol_rename",
00:04:06.351 "bdev_lvol_clone_bdev",
00:04:06.351 "bdev_lvol_clone",
00:04:06.351 "bdev_lvol_snapshot",
00:04:06.351 "bdev_lvol_create",
00:04:06.351 "bdev_lvol_delete_lvstore",
00:04:06.351 "bdev_lvol_rename_lvstore",
00:04:06.351 "bdev_lvol_create_lvstore",
00:04:06.351 "bdev_raid_set_options",
00:04:06.351 "bdev_raid_remove_base_bdev",
00:04:06.351 "bdev_raid_add_base_bdev",
00:04:06.351 "bdev_raid_delete",
00:04:06.351 "bdev_raid_create",
00:04:06.351 "bdev_raid_get_bdevs",
00:04:06.351 "bdev_error_inject_error",
00:04:06.351 "bdev_error_delete",
00:04:06.351 "bdev_error_create",
00:04:06.351 "bdev_split_delete",
00:04:06.351 "bdev_split_create",
00:04:06.351 "bdev_delay_delete",
00:04:06.351 "bdev_delay_create",
00:04:06.351 "bdev_delay_update_latency",
00:04:06.351 "bdev_zone_block_delete",
00:04:06.351 "bdev_zone_block_create",
00:04:06.351 "blobfs_create",
00:04:06.351 "blobfs_detect",
00:04:06.351 "blobfs_set_cache_size",
00:04:06.351 "bdev_aio_delete",
00:04:06.351 "bdev_aio_rescan",
00:04:06.351 "bdev_aio_create",
00:04:06.351 "bdev_ftl_set_property",
00:04:06.351 "bdev_ftl_get_properties",
00:04:06.351 "bdev_ftl_get_stats",
00:04:06.351 "bdev_ftl_unmap",
00:04:06.351 "bdev_ftl_unload",
00:04:06.351 "bdev_ftl_delete",
00:04:06.351 "bdev_ftl_load",
00:04:06.351 "bdev_ftl_create",
00:04:06.351 "bdev_virtio_attach_controller",
00:04:06.351 "bdev_virtio_scsi_get_devices",
00:04:06.351 "bdev_virtio_detach_controller",
00:04:06.351 "bdev_virtio_blk_set_hotplug",
00:04:06.351 "bdev_iscsi_delete",
00:04:06.351 "bdev_iscsi_create",
00:04:06.351 "bdev_iscsi_set_options",
00:04:06.351 "accel_error_inject_error",
00:04:06.351 "ioat_scan_accel_module",
00:04:06.351 "dsa_scan_accel_module",
00:04:06.351 "iaa_scan_accel_module",
00:04:06.351 "vfu_virtio_create_scsi_endpoint",
00:04:06.351 "vfu_virtio_scsi_remove_target",
00:04:06.351 "vfu_virtio_scsi_add_target",
00:04:06.351 "vfu_virtio_create_blk_endpoint",
00:04:06.351 "vfu_virtio_delete_endpoint",
00:04:06.351 "keyring_file_remove_key",
00:04:06.351 "keyring_file_add_key",
00:04:06.351 "keyring_linux_set_options",
00:04:06.351 "iscsi_get_histogram",
00:04:06.351 "iscsi_enable_histogram",
00:04:06.351 "iscsi_set_options",
00:04:06.351 "iscsi_get_auth_groups",
00:04:06.351 "iscsi_auth_group_remove_secret",
00:04:06.351 "iscsi_auth_group_add_secret",
00:04:06.351 "iscsi_delete_auth_group",
00:04:06.351 "iscsi_create_auth_group",
00:04:06.351 "iscsi_set_discovery_auth",
00:04:06.351 "iscsi_get_options",
00:04:06.351 "iscsi_target_node_request_logout",
00:04:06.351 "iscsi_target_node_set_redirect",
00:04:06.351 "iscsi_target_node_set_auth",
00:04:06.351 "iscsi_target_node_add_lun",
00:04:06.351 "iscsi_get_stats",
00:04:06.351 "iscsi_get_connections",
00:04:06.351 "iscsi_portal_group_set_auth",
00:04:06.351 "iscsi_start_portal_group",
00:04:06.351 "iscsi_delete_portal_group",
00:04:06.351 "iscsi_create_portal_group",
00:04:06.351 "iscsi_get_portal_groups",
00:04:06.351 "iscsi_delete_target_node",
00:04:06.351 "iscsi_target_node_remove_pg_ig_maps",
00:04:06.351 "iscsi_target_node_add_pg_ig_maps",
00:04:06.351 "iscsi_create_target_node",
00:04:06.351 "iscsi_get_target_nodes",
00:04:06.351 "iscsi_delete_initiator_group",
00:04:06.351 "iscsi_initiator_group_remove_initiators",
00:04:06.351 "iscsi_initiator_group_add_initiators",
00:04:06.351 "iscsi_create_initiator_group",
00:04:06.351 "iscsi_get_initiator_groups",
00:04:06.351 "nvmf_set_crdt",
00:04:06.351 "nvmf_set_config",
00:04:06.351 "nvmf_set_max_subsystems",
00:04:06.351 "nvmf_stop_mdns_prr",
00:04:06.351 "nvmf_publish_mdns_prr",
00:04:06.351 "nvmf_subsystem_get_listeners",
00:04:06.351 "nvmf_subsystem_get_qpairs",
00:04:06.351 "nvmf_subsystem_get_controllers",
00:04:06.351 "nvmf_get_stats",
00:04:06.351 "nvmf_get_transports",
00:04:06.351 "nvmf_create_transport",
00:04:06.351 "nvmf_get_targets",
00:04:06.351 "nvmf_delete_target",
00:04:06.351 "nvmf_create_target",
00:04:06.351 "nvmf_subsystem_allow_any_host",
00:04:06.351 "nvmf_subsystem_remove_host",
00:04:06.351 "nvmf_subsystem_add_host",
00:04:06.351 "nvmf_ns_remove_host",
00:04:06.351 "nvmf_ns_add_host",
00:04:06.351 "nvmf_subsystem_remove_ns",
00:04:06.351 "nvmf_subsystem_add_ns",
00:04:06.351 "nvmf_subsystem_listener_set_ana_state",
00:04:06.351 "nvmf_discovery_get_referrals",
00:04:06.351 "nvmf_discovery_remove_referral",
00:04:06.351 "nvmf_discovery_add_referral",
00:04:06.351 "nvmf_subsystem_remove_listener",
00:04:06.351 "nvmf_subsystem_add_listener",
00:04:06.351 "nvmf_delete_subsystem",
00:04:06.351 "nvmf_create_subsystem",
00:04:06.351 "nvmf_get_subsystems",
00:04:06.351 "env_dpdk_get_mem_stats",
00:04:06.351 "nbd_get_disks",
00:04:06.351 "nbd_stop_disk",
00:04:06.351 "nbd_start_disk",
00:04:06.351 "ublk_recover_disk",
00:04:06.351 "ublk_get_disks",
00:04:06.351 "ublk_stop_disk",
00:04:06.351 "ublk_start_disk",
00:04:06.351 "ublk_destroy_target",
00:04:06.351 "ublk_create_target",
00:04:06.351 "virtio_blk_create_transport",
00:04:06.351 "virtio_blk_get_transports",
00:04:06.351 "vhost_controller_set_coalescing",
00:04:06.351 "vhost_get_controllers",
00:04:06.351 "vhost_delete_controller",
00:04:06.351 "vhost_create_blk_controller",
00:04:06.351 "vhost_scsi_controller_remove_target",
00:04:06.351 "vhost_scsi_controller_add_target",
00:04:06.351 "vhost_start_scsi_controller",
00:04:06.351 "vhost_create_scsi_controller",
00:04:06.351 "thread_set_cpumask",
00:04:06.351 "framework_get_governor",
00:04:06.351 "framework_get_scheduler",
00:04:06.351 "framework_set_scheduler",
00:04:06.351 "framework_get_reactors",
00:04:06.351 "thread_get_io_channels",
00:04:06.351 "thread_get_pollers",
00:04:06.351 "thread_get_stats",
00:04:06.351 "framework_monitor_context_switch",
00:04:06.351 "spdk_kill_instance",
00:04:06.351 "log_enable_timestamps",
00:04:06.351 "log_get_flags",
00:04:06.351 "log_clear_flag",
00:04:06.351 "log_set_flag",
00:04:06.351 "log_get_level",
00:04:06.351 "log_set_level",
00:04:06.351 "log_get_print_level",
00:04:06.351 "log_set_print_level",
00:04:06.351 "framework_enable_cpumask_locks",
00:04:06.351 "framework_disable_cpumask_locks",
00:04:06.351 "framework_wait_init",
00:04:06.352 "framework_start_init",
00:04:06.352 "scsi_get_devices",
00:04:06.352 "bdev_get_histogram",
00:04:06.352 "bdev_enable_histogram",
00:04:06.352 "bdev_set_qos_limit",
00:04:06.352 "bdev_set_qd_sampling_period",
00:04:06.352 "bdev_get_bdevs",
00:04:06.352 "bdev_reset_iostat",
00:04:06.352 "bdev_get_iostat",
00:04:06.352 "bdev_examine",
00:04:06.352 "bdev_wait_for_examine",
00:04:06.352 "bdev_set_options",
00:04:06.352 "notify_get_notifications",
00:04:06.352 "notify_get_types",
00:04:06.352 "accel_get_stats",
00:04:06.352 "accel_set_options",
00:04:06.352 "accel_set_driver",
00:04:06.352 "accel_crypto_key_destroy",
00:04:06.352 "accel_crypto_keys_get",
00:04:06.352 "accel_crypto_key_create",
00:04:06.352 "accel_assign_opc",
00:04:06.352 "accel_get_module_info",
00:04:06.352 "accel_get_opc_assignments",
00:04:06.352 "vmd_rescan",
00:04:06.352 "vmd_remove_device",
00:04:06.352 "vmd_enable",
00:04:06.352 "sock_get_default_impl",
00:04:06.352 "sock_set_default_impl",
00:04:06.352 "sock_impl_set_options",
00:04:06.352 "sock_impl_get_options",
00:04:06.352 "iobuf_get_stats",
00:04:06.352 "iobuf_set_options",
00:04:06.352 "keyring_get_keys",
00:04:06.352 "framework_get_pci_devices",
00:04:06.352 "framework_get_config",
00:04:06.352 "framework_get_subsystems",
00:04:06.352 "vfu_tgt_set_base_path",
00:04:06.352 "trace_get_info",
00:04:06.352 "trace_get_tpoint_group_mask",
00:04:06.352 "trace_disable_tpoint_group",
00:04:06.352 "trace_enable_tpoint_group",
00:04:06.352 "trace_clear_tpoint_mask",
00:04:06.352 "trace_set_tpoint_mask",
00:04:06.352 "spdk_get_version",
00:04:06.352 "rpc_get_methods"
00:04:06.352 ]
00:04:06.352 11:08:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:06.352 11:08:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:04:06.352 11:08:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 453691
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 453691 ']'
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 453691
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 453691
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 453691'
00:04:06.352 killing process with pid 453691
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 453691
00:04:06.352 11:08:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 453691
00:04:06.918
00:04:06.918 real 0m1.259s
00:04:06.918 user 0m2.211s
00:04:06.918 sys 0m0.434s
00:04:06.918 11:08:32 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:06.918 11:08:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:04:06.918 ************************************
00:04:06.918 END TEST spdkcli_tcp
00:04:06.918 ************************************
00:04:06.918 11:08:32 -- common/autotest_common.sh@1142 -- # return 0
00:04:06.918 11:08:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:06.918 11:08:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:06.918 11:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:06.918 11:08:32 -- common/autotest_common.sh@10 -- # set +x
00:04:06.918 ************************************
00:04:06.918 START TEST dpdk_mem_utility
00:04:06.918 ************************************
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:06.918 * Looking for test storage...
00:04:06.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:06.918 11:08:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:06.918 11:08:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=453979
00:04:06.918 11:08:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:06.918 11:08:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 453979
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 453979 ']'
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:06.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:06.918 11:08:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:06.918 [2024-07-12 11:08:33.023639] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:04:06.918 [2024-07-12 11:08:33.023733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453979 ]
00:04:07.176 EAL: No free 2048 kB hugepages reported on node 1
00:04:07.176 [2024-07-12 11:08:33.083802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:07.176 [2024-07-12 11:08:33.192442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.434 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:07.434 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:04:07.434 11:08:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:07.434 11:08:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:07.434 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:07.434 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:07.434 {
00:04:07.434 "filename": "/tmp/spdk_mem_dump.txt"
00:04:07.434 }
00:04:07.434 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:07.434 11:08:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:07.434 DPDK memory size 814.000000 MiB in 1 heap(s)
00:04:07.434 1 heaps totaling size 814.000000 MiB
00:04:07.434 size: 814.000000 MiB heap id: 0
00:04:07.434 end heaps----------
00:04:07.434 8 mempools totaling size 598.116089 MiB
00:04:07.434 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:07.434 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:07.434 size: 84.521057 MiB name: bdev_io_453979
00:04:07.434 size: 51.011292 MiB name: evtpool_453979
00:04:07.434 size: 50.003479 MiB name: msgpool_453979
00:04:07.434 size: 21.763794 MiB name: PDU_Pool
00:04:07.434 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:07.434 size: 0.026123 MiB name: Session_Pool
00:04:07.434 end mempools-------
00:04:07.434 6 memzones totaling size 4.142822 MiB
00:04:07.434 size: 1.000366 MiB name: RG_ring_0_453979
00:04:07.434 size: 1.000366 MiB name: RG_ring_1_453979
00:04:07.434 size: 1.000366 MiB name: RG_ring_4_453979
00:04:07.434 size: 1.000366 MiB name: RG_ring_5_453979
00:04:07.434 size: 0.125366 MiB name: RG_ring_2_453979
00:04:07.434 size: 0.015991 MiB name: RG_ring_3_453979
00:04:07.434 end memzones-------
00:04:07.434 11:08:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:07.435 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:04:07.435 list of free elements. size: 12.519348 MiB
00:04:07.435 element at address: 0x200000400000 with size: 1.999512 MiB
00:04:07.435 element at address: 0x200018e00000 with size: 0.999878 MiB
00:04:07.435 element at address: 0x200019000000 with size: 0.999878 MiB
00:04:07.435 element at address: 0x200003e00000 with size: 0.996277 MiB
00:04:07.435 element at address: 0x200031c00000 with size: 0.994446 MiB
00:04:07.435 element at address: 0x200013800000 with size: 0.978699 MiB
00:04:07.435 element at address: 0x200007000000 with size: 0.959839 MiB
00:04:07.435 element at address: 0x200019200000 with size: 0.936584 MiB
00:04:07.435 element at address: 0x200000200000 with size: 0.841614 MiB
00:04:07.435 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:04:07.435 element at address: 0x20000b200000 with size: 0.490723 MiB
00:04:07.435 element at address: 0x200000800000 with size: 0.487793 MiB
00:04:07.435 element at address: 0x200019400000 with size: 0.485657 MiB
00:04:07.435 element at address: 0x200027e00000 with size: 0.410034 MiB
00:04:07.435 element at address: 0x200003a00000 with size: 0.355530 MiB
00:04:07.435 list of standard malloc elements. size: 199.218079 MiB
00:04:07.435 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:04:07.435 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:04:07.435 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:07.435 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:04:07.435 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:04:07.435 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:07.435 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:04:07.435 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:07.435 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:04:07.435 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003adb300 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003adb500 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003affa80 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003affb40 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:04:07.435 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:04:07.435 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200027e69040 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:04:07.435 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:04:07.435 list of memzone associated elements. size: 602.262573 MiB
00:04:07.435 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:04:07.435 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:07.435 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:04:07.435 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:07.435 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:04:07.435 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_453979_0
00:04:07.435 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:04:07.435 associated memzone info: size: 48.002930 MiB name: MP_evtpool_453979_0
00:04:07.435 element at address: 0x200003fff380 with size: 48.003052 MiB
00:04:07.435 associated memzone info: size: 48.002930 MiB name: MP_msgpool_453979_0
00:04:07.435 element at address: 0x2000195be940 with size: 20.255554 MiB
00:04:07.435 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:07.435 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:04:07.435 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:07.435 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:04:07.435 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_453979
00:04:07.435 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:04:07.435 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_453979
00:04:07.435 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:07.435 associated memzone info: size: 1.007996 MiB name: MP_evtpool_453979
00:04:07.435 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:04:07.435 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:07.435 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:04:07.435 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:07.435 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:04:07.435 associated
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:07.435 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:07.435 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:07.435 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:07.435 associated memzone info: size: 1.000366 MiB name: RG_ring_0_453979 00:04:07.435 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:07.435 associated memzone info: size: 1.000366 MiB name: RG_ring_1_453979 00:04:07.435 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:07.435 associated memzone info: size: 1.000366 MiB name: RG_ring_4_453979 00:04:07.435 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:07.435 associated memzone info: size: 1.000366 MiB name: RG_ring_5_453979 00:04:07.435 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:07.435 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_453979 00:04:07.435 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:07.435 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:07.435 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:07.435 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:07.435 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:07.435 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:07.435 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:07.435 associated memzone info: size: 0.125366 MiB name: RG_ring_2_453979 00:04:07.435 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:07.435 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:07.435 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:07.435 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:07.435 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:07.435 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_453979 00:04:07.435 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:07.435 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:07.435 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:07.435 associated memzone info: size: 0.000183 MiB name: MP_msgpool_453979 00:04:07.435 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:07.435 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_453979 00:04:07.435 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:07.435 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:07.435 11:08:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:07.435 11:08:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 453979 00:04:07.435 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 453979 ']' 00:04:07.435 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 453979 00:04:07.435 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:07.435 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:07.435 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 453979 00:04:07.693 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:07.693 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:07.693 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 453979' 00:04:07.693 killing process with pid 453979 00:04:07.693 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 453979 00:04:07.693 11:08:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 453979 00:04:07.951 00:04:07.951 real 0m1.092s 00:04:07.951 user 0m1.058s 
00:04:07.951 sys 0m0.401s 00:04:07.951 11:08:34 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.951 11:08:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:07.951 ************************************ 00:04:07.951 END TEST dpdk_mem_utility 00:04:07.951 ************************************ 00:04:07.951 11:08:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.951 11:08:34 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:07.951 11:08:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.951 11:08:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.951 11:08:34 -- common/autotest_common.sh@10 -- # set +x 00:04:07.951 ************************************ 00:04:07.951 START TEST event 00:04:07.951 ************************************ 00:04:07.951 11:08:34 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:08.210 * Looking for test storage... 
00:04:08.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:08.210 11:08:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:08.210 11:08:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:08.210 11:08:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:08.210 11:08:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:08.210 11:08:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.210 11:08:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:08.210 ************************************ 00:04:08.210 START TEST event_perf 00:04:08.210 ************************************ 00:04:08.210 11:08:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:08.210 Running I/O for 1 seconds...[2024-07-12 11:08:34.151018] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:08.210 [2024-07-12 11:08:34.151088] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454203 ] 00:04:08.210 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.210 [2024-07-12 11:08:34.209283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:08.210 [2024-07-12 11:08:34.316517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.210 [2024-07-12 11:08:34.316583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:08.210 [2024-07-12 11:08:34.316646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:08.210 [2024-07-12 11:08:34.316649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.581 Running I/O for 1 seconds... 00:04:09.581 lcore 0: 233192 00:04:09.581 lcore 1: 233191 00:04:09.581 lcore 2: 233192 00:04:09.581 lcore 3: 233192 00:04:09.581 done. 
00:04:09.581 00:04:09.581 real 0m1.289s 00:04:09.581 user 0m4.200s 00:04:09.581 sys 0m0.083s 00:04:09.581 11:08:35 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.581 11:08:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:09.581 ************************************ 00:04:09.581 END TEST event_perf 00:04:09.581 ************************************ 00:04:09.581 11:08:35 event -- common/autotest_common.sh@1142 -- # return 0 00:04:09.581 11:08:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:09.581 11:08:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:09.581 11:08:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.581 11:08:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.581 ************************************ 00:04:09.581 START TEST event_reactor 00:04:09.581 ************************************ 00:04:09.582 11:08:35 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:09.582 [2024-07-12 11:08:35.484761] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:09.582 [2024-07-12 11:08:35.484827] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454363 ] 00:04:09.582 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.582 [2024-07-12 11:08:35.541452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.582 [2024-07-12 11:08:35.645452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.955 test_start 00:04:10.955 oneshot 00:04:10.955 tick 100 00:04:10.955 tick 100 00:04:10.955 tick 250 00:04:10.955 tick 100 00:04:10.955 tick 100 00:04:10.955 tick 100 00:04:10.955 tick 250 00:04:10.955 tick 500 00:04:10.955 tick 100 00:04:10.955 tick 100 00:04:10.955 tick 250 00:04:10.955 tick 100 00:04:10.955 tick 100 00:04:10.955 test_end 00:04:10.955 00:04:10.955 real 0m1.285s 00:04:10.955 user 0m1.206s 00:04:10.955 sys 0m0.075s 00:04:10.955 11:08:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.955 11:08:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:10.955 ************************************ 00:04:10.955 END TEST event_reactor 00:04:10.955 ************************************ 00:04:10.955 11:08:36 event -- common/autotest_common.sh@1142 -- # return 0 00:04:10.955 11:08:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.955 11:08:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:10.955 11:08:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.955 11:08:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.955 ************************************ 00:04:10.955 START TEST event_reactor_perf 00:04:10.955 ************************************ 00:04:10.955 11:08:36 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.955 [2024-07-12 11:08:36.816966] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:10.955 [2024-07-12 11:08:36.817029] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454523 ] 00:04:10.955 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.955 [2024-07-12 11:08:36.875606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.955 [2024-07-12 11:08:36.979317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.348 test_start 00:04:12.348 test_end 00:04:12.348 Performance: 442501 events per second 00:04:12.348 00:04:12.348 real 0m1.287s 00:04:12.348 user 0m1.215s 00:04:12.348 sys 0m0.069s 00:04:12.348 11:08:38 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.348 11:08:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:12.348 ************************************ 00:04:12.348 END TEST event_reactor_perf 00:04:12.348 ************************************ 00:04:12.348 11:08:38 event -- common/autotest_common.sh@1142 -- # return 0 00:04:12.348 11:08:38 event -- event/event.sh@49 -- # uname -s 00:04:12.348 11:08:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:12.348 11:08:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:12.348 11:08:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.348 11:08:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.348 11:08:38 event -- common/autotest_common.sh@10 -- # set +x 
00:04:12.349 ************************************ 00:04:12.349 START TEST event_scheduler 00:04:12.349 ************************************ 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:12.349 * Looking for test storage... 00:04:12.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=454701 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 454701 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 454701 ']' 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.349 [2024-07-12 11:08:38.238970] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:12.349 [2024-07-12 11:08:38.239047] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454701 ] 00:04:12.349 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.349 [2024-07-12 11:08:38.297837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:12.349 [2024-07-12 11:08:38.406139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.349 [2024-07-12 11:08:38.406218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:12.349 [2024-07-12 11:08:38.406197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.349 [2024-07-12 11:08:38.406222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.349 [2024-07-12 11:08:38.438997] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:12.349 [2024-07-12 11:08:38.439025] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:12.349 [2024-07-12 11:08:38.439044] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:12.349 [2024-07-12 11:08:38.439055] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:12.349 [2024-07-12 11:08:38.439065] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.349 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.349 11:08:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.607 [2024-07-12 11:08:38.536317] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:12.607 11:08:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.607 11:08:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:12.607 11:08:38 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.607 11:08:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 ************************************ 00:04:12.608 START TEST scheduler_create_thread 00:04:12.608 ************************************ 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 2 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 3 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 4 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 5 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 6 
00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 7 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 8 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 9 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:12.608 11:08:38 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 10 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.608 11:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.173 11:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:13.173 00:04:13.173 real 0m0.591s 00:04:13.173 user 0m0.009s 00:04:13.174 sys 0m0.004s 00:04:13.174 11:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.174 11:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.174 ************************************ 00:04:13.174 END TEST scheduler_create_thread 00:04:13.174 ************************************ 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:13.174 11:08:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:13.174 11:08:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 454701 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 454701 ']' 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 454701 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 454701 00:04:13.174 11:08:39 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 454701' 00:04:13.174 killing process with pid 454701 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 454701 00:04:13.174 11:08:39 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 454701 00:04:13.739 [2024-07-12 11:08:39.632459] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:13.997 00:04:13.997 real 0m1.739s 00:04:13.997 user 0m2.139s 00:04:13.997 sys 0m0.308s 00:04:13.997 11:08:39 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.997 11:08:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.997 ************************************ 00:04:13.997 END TEST event_scheduler 00:04:13.997 ************************************ 00:04:13.997 11:08:39 event -- common/autotest_common.sh@1142 -- # return 0 00:04:13.997 11:08:39 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:13.997 11:08:39 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:13.997 11:08:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.997 11:08:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.997 11:08:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.997 ************************************ 00:04:13.997 START TEST app_repeat 00:04:13.997 ************************************ 00:04:13.997 11:08:39 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:13.997 11:08:39 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.997 11:08:39 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.997 11:08:39 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:13.997 11:08:39 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.997 11:08:39 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:13.997 11:08:39 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:13.997 11:08:39 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@19 -- # repeat_pid=455017 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 455017' 00:04:13.998 Process app_repeat pid: 455017 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:13.998 spdk_app_start Round 0 00:04:13.998 11:08:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 455017 /var/tmp/spdk-nbd.sock 00:04:13.998 11:08:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 455017 ']' 00:04:13.998 11:08:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.998 11:08:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.998 11:08:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:13.998 11:08:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.998 11:08:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.998 [2024-07-12 11:08:39.964316] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:13.998 [2024-07-12 11:08:39.964383] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455017 ] 00:04:13.998 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.998 [2024-07-12 11:08:40.025100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.255 [2024-07-12 11:08:40.131576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.255 [2024-07-12 11:08:40.131581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.255 11:08:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.255 11:08:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:14.255 11:08:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.513 Malloc0 00:04:14.513 11:08:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.772 Malloc1 00:04:14.772 11:08:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 
00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.772 11:08:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:15.030 /dev/nbd0 00:04:15.030 11:08:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:15.030 11:08:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@871 -- # 
break 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.030 1+0 records in 00:04:15.030 1+0 records out 00:04:15.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221752 s, 18.5 MB/s 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:15.030 11:08:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:15.030 11:08:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.030 11:08:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.030 11:08:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.301 /dev/nbd1 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 
)) 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.301 1+0 records in 00:04:15.301 1+0 records out 00:04:15.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196048 s, 20.9 MB/s 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:15.301 11:08:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.301 11:08:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:15.614 { 00:04:15.614 "nbd_device": "/dev/nbd0", 00:04:15.614 "bdev_name": 
"Malloc0" 00:04:15.614 }, 00:04:15.614 { 00:04:15.614 "nbd_device": "/dev/nbd1", 00:04:15.614 "bdev_name": "Malloc1" 00:04:15.614 } 00:04:15.614 ]' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.614 { 00:04:15.614 "nbd_device": "/dev/nbd0", 00:04:15.614 "bdev_name": "Malloc0" 00:04:15.614 }, 00:04:15.614 { 00:04:15.614 "nbd_device": "/dev/nbd1", 00:04:15.614 "bdev_name": "Malloc1" 00:04:15.614 } 00:04:15.614 ]' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.614 /dev/nbd1' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.614 /dev/nbd1' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 
count=256 00:04:15.614 256+0 records in 00:04:15.614 256+0 records out 00:04:15.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495505 s, 212 MB/s 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.614 256+0 records in 00:04:15.614 256+0 records out 00:04:15.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203185 s, 51.6 MB/s 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.614 256+0 records in 00:04:15.614 256+0 records out 00:04:15.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225563 s, 46.5 MB/s 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 
00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.614 11:08:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.873 11:08:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.873 11:08:41 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.129 11:08:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:16.385 11:08:42 
event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:16.385 11:08:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:16.385 11:08:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.641 11:08:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:16.899 [2024-07-12 11:08:43.020540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.156 [2024-07-12 11:08:43.135858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.156 [2024-07-12 11:08:43.135859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.156 [2024-07-12 11:08:43.188178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:17.156 [2024-07-12 11:08:43.188258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:19.677 11:08:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:19.677 11:08:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:19.677 spdk_app_start Round 1 00:04:19.677 11:08:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 455017 /var/tmp/spdk-nbd.sock 00:04:19.677 11:08:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 455017 ']' 00:04:19.677 11:08:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:19.677 11:08:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.677 11:08:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:19.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:19.677 11:08:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.677 11:08:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:19.934 11:08:46 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.934 11:08:46 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:19.934 11:08:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.191 Malloc0 00:04:20.191 11:08:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.449 Malloc1 00:04:20.449 11:08:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.449 11:08:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.706 /dev/nbd0 00:04:20.706 11:08:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.706 11:08:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.706 1+0 records in 00:04:20.706 1+0 records out 00:04:20.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181852 s, 22.5 MB/s 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:20.706 11:08:46 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:20.706 11:08:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:20.706 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.706 11:08:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.706 11:08:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.962 /dev/nbd1 00:04:20.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.962 1+0 records in 00:04:20.962 1+0 records out 00:04:20.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251242 s, 16.3 MB/s 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:20.962 11:08:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:20.962 11:08:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.963 11:08:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.218 11:08:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:21.218 { 00:04:21.218 "nbd_device": "/dev/nbd0", 00:04:21.218 "bdev_name": "Malloc0" 00:04:21.218 }, 00:04:21.218 { 00:04:21.218 "nbd_device": "/dev/nbd1", 00:04:21.218 "bdev_name": "Malloc1" 00:04:21.218 } 00:04:21.218 ]' 00:04:21.218 11:08:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:21.218 { 00:04:21.218 "nbd_device": "/dev/nbd0", 00:04:21.218 "bdev_name": "Malloc0" 00:04:21.218 }, 00:04:21.218 { 00:04:21.218 "nbd_device": "/dev/nbd1", 00:04:21.218 "bdev_name": "Malloc1" 00:04:21.218 } 00:04:21.218 ]' 00:04:21.218 11:08:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:21.474 /dev/nbd1' 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:21.474 /dev/nbd1' 00:04:21.474 
11:08:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:21.474 256+0 records in 00:04:21.474 256+0 records out 00:04:21.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500054 s, 210 MB/s 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:21.474 256+0 records in 00:04:21.474 256+0 records out 00:04:21.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202151 s, 51.9 MB/s 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:21.474 256+0 records in 00:04:21.474 256+0 records out 00:04:21.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022141 s, 47.4 MB/s 00:04:21.474 11:08:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.475 11:08:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.730 11:08:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:21.986 11:08:47 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.986 11:08:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.243 11:08:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.243 11:08:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.499 11:08:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:22.755 [2024-07-12 11:08:48.786182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.012 [2024-07-12 11:08:48.891577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.012 [2024-07-12 11:08:48.891580] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.012 [2024-07-12 11:08:48.949462] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:23.012 [2024-07-12 11:08:48.949535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:25.538 11:08:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.538 11:08:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:25.538 spdk_app_start Round 2 00:04:25.538 11:08:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 455017 /var/tmp/spdk-nbd.sock 00:04:25.538 11:08:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 455017 ']' 00:04:25.538 11:08:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.538 11:08:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.538 11:08:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:25.538 11:08:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.538 11:08:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:25.796 11:08:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.796 11:08:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:25.796 11:08:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.054 Malloc0 00:04:26.054 11:08:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.312 Malloc1 00:04:26.312 11:08:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.312 11:08:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.571 /dev/nbd0 00:04:26.571 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.571 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.571 1+0 records in 00:04:26.571 1+0 records out 00:04:26.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022457 s, 18.2 MB/s 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:26.571 11:08:52 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:26.571 11:08:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:26.571 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.571 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.571 11:08:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.829 /dev/nbd1 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.829 1+0 records in 00:04:26.829 1+0 records out 00:04:26.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183345 s, 22.3 MB/s 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:26.829 11:08:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.829 11:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.086 { 00:04:27.086 "nbd_device": "/dev/nbd0", 00:04:27.086 "bdev_name": "Malloc0" 00:04:27.086 }, 00:04:27.086 { 00:04:27.086 "nbd_device": "/dev/nbd1", 00:04:27.086 "bdev_name": "Malloc1" 00:04:27.086 } 00:04:27.086 ]' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.086 { 00:04:27.086 "nbd_device": "/dev/nbd0", 00:04:27.086 "bdev_name": "Malloc0" 00:04:27.086 }, 00:04:27.086 { 00:04:27.086 "nbd_device": "/dev/nbd1", 00:04:27.086 "bdev_name": "Malloc1" 00:04:27.086 } 00:04:27.086 ]' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.086 /dev/nbd1' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.086 /dev/nbd1' 00:04:27.086 
11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.086 256+0 records in 00:04:27.086 256+0 records out 00:04:27.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522801 s, 201 MB/s 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.086 256+0 records in 00:04:27.086 256+0 records out 00:04:27.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215702 s, 48.6 MB/s 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.086 256+0 records in 00:04:27.086 256+0 records out 00:04:27.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236054 s, 44.4 MB/s 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.086 11:08:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.343 11:08:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.600 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.600 11:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.600 11:08:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.600 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.601 11:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.601 11:08:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.601 11:08:53 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:27.601 11:08:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.601 11:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.601 11:08:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.601 11:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.859 11:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.859 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.859 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.117 11:08:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.117 11:08:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.374 11:08:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:28.631 [2024-07-12 11:08:54.510112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.631 [2024-07-12 11:08:54.614025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.631 [2024-07-12 11:08:54.614025] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.631 [2024-07-12 11:08:54.672458] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.631 [2024-07-12 11:08:54.672531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.155 11:08:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 455017 /var/tmp/spdk-nbd.sock 00:04:31.155 11:08:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 455017 ']' 00:04:31.155 11:08:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.155 11:08:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.155 11:08:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:31.155 11:08:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.155 11:08:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:31.413 11:08:57 event.app_repeat -- event/event.sh@39 -- # killprocess 455017 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 455017 ']' 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 455017 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 455017 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 455017' 00:04:31.413 killing process with pid 455017 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@967 -- # kill 455017 00:04:31.413 11:08:57 event.app_repeat -- common/autotest_common.sh@972 -- # wait 455017 00:04:31.671 spdk_app_start is called in Round 0. 00:04:31.671 Shutdown signal received, stop current app iteration 00:04:31.671 Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 reinitialization... 00:04:31.671 spdk_app_start is called in Round 1. 00:04:31.671 Shutdown signal received, stop current app iteration 00:04:31.671 Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 reinitialization... 00:04:31.671 spdk_app_start is called in Round 2. 
00:04:31.671 Shutdown signal received, stop current app iteration 00:04:31.671 Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 reinitialization... 00:04:31.671 spdk_app_start is called in Round 3. 00:04:31.671 Shutdown signal received, stop current app iteration 00:04:31.671 11:08:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:31.671 11:08:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:31.671 00:04:31.671 real 0m17.834s 00:04:31.671 user 0m38.665s 00:04:31.671 sys 0m3.166s 00:04:31.671 11:08:57 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.671 11:08:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.671 ************************************ 00:04:31.671 END TEST app_repeat 00:04:31.671 ************************************ 00:04:31.671 11:08:57 event -- common/autotest_common.sh@1142 -- # return 0 00:04:31.671 11:08:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:31.671 11:08:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:31.671 11:08:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.671 11:08:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.671 11:08:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.929 ************************************ 00:04:31.929 START TEST cpu_locks 00:04:31.929 ************************************ 00:04:31.929 11:08:57 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:31.929 * Looking for test storage... 
00:04:31.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:31.929 11:08:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:31.929 11:08:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:31.929 11:08:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:31.929 11:08:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:31.929 11:08:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.929 11:08:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.929 11:08:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.929 ************************************ 00:04:31.929 START TEST default_locks 00:04:31.929 ************************************ 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=457361 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 457361 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 457361 ']' 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.929 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.930 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.930 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.930 11:08:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.930 [2024-07-12 11:08:57.950509] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:31.930 [2024-07-12 11:08:57.950582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457361 ] 00:04:31.930 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.930 [2024-07-12 11:08:58.011478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.188 [2024-07-12 11:08:58.118737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.446 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.446 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:32.446 11:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 457361 00:04:32.446 11:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 457361 00:04:32.446 11:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.704 lslocks: write error 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 457361 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 457361 ']' 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 457361 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457361 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457361' 00:04:32.704 killing process with pid 457361 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 457361 00:04:32.704 11:08:58 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 457361 00:04:33.270 11:08:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 457361 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 457361 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 457361 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 457361 ']' 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (457361) - No such process 00:04:33.271 ERROR: process (pid: 457361) is no longer running 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:33.271 00:04:33.271 real 0m1.273s 00:04:33.271 user 0m1.213s 00:04:33.271 sys 0m0.527s 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.271 11:08:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:04:33.271 ************************************ 00:04:33.271 END TEST default_locks 00:04:33.271 ************************************ 00:04:33.271 11:08:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:33.271 11:08:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:33.271 11:08:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.271 11:08:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.271 11:08:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.271 ************************************ 00:04:33.271 START TEST default_locks_via_rpc 00:04:33.271 ************************************ 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=457532 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 457532 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 457532 ']' 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.271 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.271 [2024-07-12 11:08:59.276218] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:33.271 [2024-07-12 11:08:59.276306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457532 ] 00:04:33.271 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.271 [2024-07-12 11:08:59.333068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.529 [2024-07-12 11:08:59.434299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.786 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 457532 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 457532 00:04:33.787 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 457532 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 457532 ']' 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 457532 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457532 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457532' 00:04:34.044 killing process with pid 457532 00:04:34.044 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- 
# kill 457532 00:04:34.045 11:08:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 457532 00:04:34.303 00:04:34.303 real 0m1.159s 00:04:34.303 user 0m1.106s 00:04:34.303 sys 0m0.489s 00:04:34.303 11:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.303 11:09:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.303 ************************************ 00:04:34.303 END TEST default_locks_via_rpc 00:04:34.303 ************************************ 00:04:34.303 11:09:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:34.303 11:09:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:34.303 11:09:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.303 11:09:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.303 11:09:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.303 ************************************ 00:04:34.303 START TEST non_locking_app_on_locked_coremask 00:04:34.303 ************************************ 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=457726 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 457726 /var/tmp/spdk.sock 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 457726 ']' 00:04:34.303 11:09:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.303 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 [2024-07-12 11:09:00.484807] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:34.561 [2024-07-12 11:09:00.484907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457726 ] 00:04:34.561 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.561 [2024-07-12 11:09:00.542741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.561 [2024-07-12 11:09:00.650711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=457753 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 457753 /var/tmp/spdk2.sock 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 457753 ']' 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.818 11:09:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.818 [2024-07-12 11:09:00.937733] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:34.818 [2024-07-12 11:09:00.937825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457753 ] 00:04:35.076 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.076 [2024-07-12 11:09:01.020474] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:35.076 [2024-07-12 11:09:01.020501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.336 [2024-07-12 11:09:01.242029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.901 11:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.901 11:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:35.901 11:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 457726 00:04:35.901 11:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 457726 00:04:35.901 11:09:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.466 lslocks: write error 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 457726 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 457726 ']' 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 457726 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457726 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 457726' 00:04:36.466 killing process with pid 457726 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 457726 00:04:36.466 11:09:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 457726 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 457753 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 457753 ']' 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 457753 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457753 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457753' 00:04:37.400 killing process with pid 457753 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 457753 00:04:37.400 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 457753 00:04:37.657 00:04:37.657 real 0m3.316s 00:04:37.657 user 0m3.540s 00:04:37.657 sys 0m0.994s 00:04:37.657 11:09:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.657 11:09:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.657 ************************************ 00:04:37.657 END TEST non_locking_app_on_locked_coremask 00:04:37.657 ************************************ 00:04:37.657 11:09:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:37.657 11:09:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:37.657 11:09:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.657 11:09:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.657 11:09:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.914 ************************************ 00:04:37.914 START TEST locking_app_on_unlocked_coremask 00:04:37.914 ************************************ 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=458238 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 458238 /var/tmp/spdk.sock 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 458238 ']' 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.914 11:09:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.914 [2024-07-12 11:09:03.849091] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:37.914 [2024-07-12 11:09:03.849197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458238 ] 00:04:37.914 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.914 [2024-07-12 11:09:03.906134] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:37.914 [2024-07-12 11:09:03.906184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.914 [2024-07-12 11:09:04.017008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=458250 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 458250 /var/tmp/spdk2.sock 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 458250 ']' 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.172 11:09:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.429 [2024-07-12 11:09:04.310116] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:38.429 [2024-07-12 11:09:04.310201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458250 ] 00:04:38.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.429 [2024-07-12 11:09:04.393111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.687 [2024-07-12 11:09:04.607649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.252 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.252 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:39.252 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 458250 00:04:39.252 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 458250 00:04:39.252 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.842 lslocks: write error 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 458238 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 458238 ']' 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 458238 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 458238 00:04:39.842 11:09:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 458238' 00:04:39.842 killing process with pid 458238 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 458238 00:04:39.842 11:09:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 458238 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 458250 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 458250 ']' 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 458250 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 458250 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 458250' 00:04:40.776 killing process with pid 458250 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 458250 00:04:40.776 11:09:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 458250 00:04:41.342 00:04:41.342 real 0m3.400s 00:04:41.342 user 0m3.579s 00:04:41.342 sys 0m1.068s 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.342 ************************************ 00:04:41.342 END TEST locking_app_on_unlocked_coremask 00:04:41.342 ************************************ 00:04:41.342 11:09:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:41.342 11:09:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:41.342 11:09:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.342 11:09:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.342 11:09:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.342 ************************************ 00:04:41.342 START TEST locking_app_on_locked_coremask 00:04:41.342 ************************************ 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=458695 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 458695 /var/tmp/spdk.sock 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 458695 
']' 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.342 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.342 [2024-07-12 11:09:07.298634] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:41.342 [2024-07-12 11:09:07.298715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458695 ] 00:04:41.342 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.342 [2024-07-12 11:09:07.358007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.342 [2024-07-12 11:09:07.468870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=458928 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 458928 
/var/tmp/spdk2.sock 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 458928 /var/tmp/spdk2.sock 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 458928 /var/tmp/spdk2.sock 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 458928 ']' 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.600 11:09:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 [2024-07-12 11:09:07.767052] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:41.859 [2024-07-12 11:09:07.767138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458928 ] 00:04:41.859 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.859 [2024-07-12 11:09:07.860603] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 458695 has claimed it. 00:04:41.859 [2024-07-12 11:09:07.860668] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (458928) - No such process 00:04:42.425 ERROR: process (pid: 458928) is no longer running 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 458695 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 458695 00:04:42.425 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.683 lslocks: write error 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 458695 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 458695 ']' 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 458695 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 458695 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 458695' 00:04:42.683 killing process with pid 458695 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 458695 00:04:42.683 11:09:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 458695 00:04:43.248 00:04:43.248 real 0m1.956s 00:04:43.248 user 0m2.142s 00:04:43.248 sys 0m0.585s 00:04:43.248 11:09:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.248 11:09:09 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.248 ************************************ 00:04:43.248 END TEST locking_app_on_locked_coremask 00:04:43.248 ************************************ 00:04:43.248 11:09:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:43.248 11:09:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:43.248 11:09:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.248 11:09:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.248 11:09:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.248 ************************************ 00:04:43.248 START TEST locking_overlapped_coremask 00:04:43.248 ************************************ 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=459479 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 459479 /var/tmp/spdk.sock 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 459479 ']' 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:04:43.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.248 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.248 [2024-07-12 11:09:09.305046] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:43.248 [2024-07-12 11:09:09.305135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459479 ] 00:04:43.248 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.248 [2024-07-12 11:09:09.361800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.506 [2024-07-12 11:09:09.472423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.506 [2024-07-12 11:09:09.472467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.506 [2024-07-12 11:09:09.472471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=459490 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 459490 /var/tmp/spdk2.sock 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 
459490 /var/tmp/spdk2.sock 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 459490 /var/tmp/spdk2.sock 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 459490 ']' 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.765 11:09:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.765 [2024-07-12 11:09:09.781349] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:43.765 [2024-07-12 11:09:09.781440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459490 ] 00:04:43.765 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.765 [2024-07-12 11:09:09.871430] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 459479 has claimed it. 00:04:43.765 [2024-07-12 11:09:09.871492] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:44.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (459490) - No such process 00:04:44.699 ERROR: process (pid: 459490) is no longer running 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask 
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 459479 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 459479 ']' 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 459479 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459479 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459479' 00:04:44.699 killing process with pid 459479 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 459479 00:04:44.699 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 459479 00:04:44.958 00:04:44.958 real 0m1.701s 00:04:44.958 user 0m4.544s 00:04:44.958 sys 0m0.453s 00:04:44.958 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.958 11:09:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- 
# set +x 00:04:44.958 ************************************ 00:04:44.958 END TEST locking_overlapped_coremask 00:04:44.958 ************************************ 00:04:44.958 11:09:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:44.958 11:09:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:44.958 11:09:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.958 11:09:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.958 11:09:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.958 ************************************ 00:04:44.958 START TEST locking_overlapped_coremask_via_rpc 00:04:44.958 ************************************ 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=459655 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 459655 /var/tmp/spdk.sock 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 459655 ']' 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.958 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.958 [2024-07-12 11:09:11.056431] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:44.958 [2024-07-12 11:09:11.056521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459655 ] 00:04:44.958 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.216 [2024-07-12 11:09:11.117179] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:45.216 [2024-07-12 11:09:11.117213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.216 [2024-07-12 11:09:11.227926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.216 [2024-07-12 11:09:11.227983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.216 [2024-07-12 11:09:11.227986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=459786 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 
--disable-cpumask-locks 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 459786 /var/tmp/spdk2.sock 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 459786 ']' 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.474 11:09:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.474 [2024-07-12 11:09:11.529417] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:45.474 [2024-07-12 11:09:11.529502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459786 ] 00:04:45.474 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.732 [2024-07-12 11:09:11.615552] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:45.732 [2024-07-12 11:09:11.615590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:45.732 [2024-07-12 11:09:11.839614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.732 [2024-07-12 11:09:11.842904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:45.732 [2024-07-12 11:09:11.842907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.666 11:09:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.666 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.667 [2024-07-12 11:09:12.506962] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 459655 has claimed it. 00:04:46.667 request: 00:04:46.667 { 00:04:46.667 "method": "framework_enable_cpumask_locks", 00:04:46.667 "req_id": 1 00:04:46.667 } 00:04:46.667 Got JSON-RPC error response 00:04:46.667 response: 00:04:46.667 { 00:04:46.667 "code": -32603, 00:04:46.667 "message": "Failed to claim CPU core: 2" 00:04:46.667 } 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 459655 /var/tmp/spdk.sock 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- 
# '[' -z 459655 ']' 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 459786 /var/tmp/spdk2.sock 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 459786 ']' 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.667 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:46.924 00:04:46.924 real 0m1.992s 00:04:46.924 user 0m1.048s 00:04:46.924 sys 0m0.162s 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.924 11:09:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.924 ************************************ 00:04:46.924 END TEST locking_overlapped_coremask_via_rpc 00:04:46.925 ************************************ 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:46.925 11:09:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:46.925 11:09:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
459655 ]] 00:04:46.925 11:09:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 459655 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 459655 ']' 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 459655 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459655 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459655' 00:04:46.925 killing process with pid 459655 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 459655 00:04:46.925 11:09:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 459655 00:04:47.489 11:09:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 459786 ]] 00:04:47.489 11:09:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 459786 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 459786 ']' 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 459786 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459786 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:47.489 11:09:13 event.cpu_locks -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 459786' 00:04:47.489 killing process with pid 459786 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 459786 00:04:47.489 11:09:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 459786 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 459655 ]] 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 459655 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 459655 ']' 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 459655 00:04:48.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (459655) - No such process 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 459655 is not found' 00:04:48.055 Process with pid 459655 is not found 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 459786 ]] 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 459786 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 459786 ']' 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 459786 00:04:48.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (459786) - No such process 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 459786 is not found' 00:04:48.055 Process with pid 459786 is not found 00:04:48.055 11:09:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:48.055 00:04:48.055 real 0m16.148s 00:04:48.055 user 0m28.120s 00:04:48.055 sys 0m5.166s 00:04:48.055 11:09:13 event.cpu_locks -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.055 11:09:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.055 ************************************ 00:04:48.055 END TEST cpu_locks 00:04:48.055 ************************************ 00:04:48.055 11:09:13 event -- common/autotest_common.sh@1142 -- # return 0 00:04:48.055 00:04:48.055 real 0m39.937s 00:04:48.055 user 1m15.699s 00:04:48.055 sys 0m9.088s 00:04:48.055 11:09:13 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.055 11:09:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.055 ************************************ 00:04:48.055 END TEST event 00:04:48.055 ************************************ 00:04:48.055 11:09:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.055 11:09:14 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:48.055 11:09:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.055 11:09:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.055 11:09:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.055 ************************************ 00:04:48.055 START TEST thread 00:04:48.055 ************************************ 00:04:48.055 11:09:14 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:48.056 * Looking for test storage... 
00:04:48.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:48.056 11:09:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:48.056 11:09:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:48.056 11:09:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.056 11:09:14 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.056 ************************************ 00:04:48.056 START TEST thread_poller_perf 00:04:48.056 ************************************ 00:04:48.056 11:09:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:48.056 [2024-07-12 11:09:14.138427] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:48.056 [2024-07-12 11:09:14.138493] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460149 ] 00:04:48.056 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.314 [2024-07-12 11:09:14.201055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.314 [2024-07-12 11:09:14.312140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.314 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:49.687 ====================================== 00:04:49.687 busy:2707463688 (cyc) 00:04:49.687 total_run_count: 366000 00:04:49.687 tsc_hz: 2700000000 (cyc) 00:04:49.687 ====================================== 00:04:49.687 poller_cost: 7397 (cyc), 2739 (nsec) 00:04:49.687 00:04:49.687 real 0m1.304s 00:04:49.687 user 0m1.211s 00:04:49.687 sys 0m0.084s 00:04:49.687 11:09:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.687 11:09:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.687 ************************************ 00:04:49.687 END TEST thread_poller_perf 00:04:49.687 ************************************ 00:04:49.687 11:09:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:49.687 11:09:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:49.687 11:09:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:49.687 11:09:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.687 11:09:15 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.687 ************************************ 00:04:49.687 START TEST thread_poller_perf 00:04:49.687 ************************************ 00:04:49.687 11:09:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:49.687 [2024-07-12 11:09:15.490719] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:49.687 [2024-07-12 11:09:15.490782] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460312 ] 00:04:49.687 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.687 [2024-07-12 11:09:15.548324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.687 [2024-07-12 11:09:15.654265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.687 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:51.061 ====================================== 00:04:51.061 busy:2702519010 (cyc) 00:04:51.061 total_run_count: 4867000 00:04:51.061 tsc_hz: 2700000000 (cyc) 00:04:51.061 ====================================== 00:04:51.061 poller_cost: 555 (cyc), 205 (nsec) 00:04:51.061 00:04:51.061 real 0m1.290s 00:04:51.061 user 0m1.210s 00:04:51.061 sys 0m0.074s 00:04:51.061 11:09:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.061 11:09:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.061 ************************************ 00:04:51.061 END TEST thread_poller_perf 00:04:51.061 ************************************ 00:04:51.061 11:09:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:51.061 11:09:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:51.061 00:04:51.061 real 0m2.752s 00:04:51.061 user 0m2.477s 00:04:51.061 sys 0m0.272s 00:04:51.061 11:09:16 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.061 11:09:16 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.061 ************************************ 00:04:51.061 END TEST thread 00:04:51.061 ************************************ 00:04:51.061 11:09:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.061 11:09:16 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:51.061 11:09:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.061 11:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.061 11:09:16 -- common/autotest_common.sh@10 -- # set +x 00:04:51.061 ************************************ 00:04:51.061 START TEST accel 00:04:51.061 ************************************ 00:04:51.061 11:09:16 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:51.061 * Looking for test storage... 00:04:51.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:51.061 11:09:16 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:51.061 11:09:16 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:51.061 11:09:16 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.061 11:09:16 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=460621 00:04:51.061 11:09:16 accel -- accel/accel.sh@63 -- # waitforlisten 460621 00:04:51.061 11:09:16 accel -- common/autotest_common.sh@829 -- # '[' -z 460621 ']' 00:04:51.061 11:09:16 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.061 11:09:16 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:51.061 11:09:16 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:51.061 11:09:16 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.061 11:09:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.062 11:09:16 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:51.062 11:09:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.062 11:09:16 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.062 11:09:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.062 11:09:16 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.062 11:09:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.062 11:09:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.062 11:09:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:51.062 11:09:16 accel -- accel/accel.sh@41 -- # jq -r . 00:04:51.062 [2024-07-12 11:09:16.947885] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:51.062 [2024-07-12 11:09:16.947992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460621 ] 00:04:51.062 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.062 [2024-07-12 11:09:17.004652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.062 [2024-07-12 11:09:17.108814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@862 -- # return 0 00:04:51.321 11:09:17 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:51.321 11:09:17 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:51.321 11:09:17 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:51.321 11:09:17 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:51.321 11:09:17 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:51.321 11:09:17 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.321 11:09:17 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # 
IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # IFS== 00:04:51.321 11:09:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:51.321 11:09:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:51.321 11:09:17 accel -- accel/accel.sh@75 -- # killprocess 460621 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@948 -- # '[' -z 460621 ']' 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@952 -- # kill -0 460621 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@953 -- # uname 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460621 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460621' 00:04:51.321 killing process with pid 460621 00:04:51.321 11:09:17 accel -- common/autotest_common.sh@967 -- # kill 460621 00:04:51.321 
11:09:17 accel -- common/autotest_common.sh@972 -- # wait 460621 00:04:51.887 11:09:17 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:51.887 11:09:17 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:51.887 11:09:17 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:51.887 11:09:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.887 11:09:17 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.887 11:09:17 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:51.887 11:09:17 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:04:51.887 11:09:17 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.888 11:09:17 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:51.888 11:09:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:51.888 11:09:17 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:51.888 11:09:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:51.888 11:09:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.888 11:09:17 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.888 ************************************ 00:04:51.888 START TEST accel_missing_filename 00:04:51.888 ************************************ 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.888 11:09:17 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:51.888 11:09:17 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:51.888 11:09:17 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:51.888 [2024-07-12 11:09:17.981177] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:51.888 [2024-07-12 11:09:17.981246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460779 ] 00:04:51.888 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.145 [2024-07-12 11:09:18.039749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.145 [2024-07-12 11:09:18.151050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.145 [2024-07-12 11:09:18.210523] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.403 [2024-07-12 11:09:18.292602] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:52.403 A filename is required. 
00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.403 00:04:52.403 real 0m0.442s 00:04:52.403 user 0m0.340s 00:04:52.403 sys 0m0.136s 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.403 11:09:18 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:52.403 ************************************ 00:04:52.403 END TEST accel_missing_filename 00:04:52.403 ************************************ 00:04:52.403 11:09:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.403 11:09:18 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.403 11:09:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:52.403 11:09:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.403 11:09:18 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.403 ************************************ 00:04:52.403 START TEST accel_compress_verify 00:04:52.403 ************************************ 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:52.403 11:09:18 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.403 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:52.404 11:09:18 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:52.404 [2024-07-12 11:09:18.471683] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:52.404 [2024-07-12 11:09:18.471749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460823 ] 00:04:52.404 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.404 [2024-07-12 11:09:18.528309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.662 [2024-07-12 11:09:18.635552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.662 [2024-07-12 11:09:18.692115] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.662 [2024-07-12 11:09:18.776021] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:52.920 00:04:52.920 Compression does not support the verify option, aborting. 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.920 00:04:52.920 real 0m0.435s 00:04:52.920 user 0m0.332s 00:04:52.920 sys 0m0.137s 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.920 11:09:18 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:52.920 ************************************ 00:04:52.920 END TEST accel_compress_verify 00:04:52.920 ************************************ 00:04:52.920 11:09:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.920 11:09:18 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:52.920 11:09:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:52.920 11:09:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.920 11:09:18 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.920 ************************************ 00:04:52.920 START TEST accel_wrong_workload 00:04:52.920 ************************************ 00:04:52.920 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:52.920 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:52.920 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:52.921 11:09:18 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:52.921 Unsupported workload type: foobar 00:04:52.921 [2024-07-12 11:09:18.950945] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:52.921 accel_perf options: 00:04:52.921 [-h help message] 00:04:52.921 [-q queue depth per core] 00:04:52.921 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:52.921 [-T number of threads per core 00:04:52.921 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:52.921 [-t time in seconds] 00:04:52.921 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:52.921 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:52.921 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:52.921 [-l for compress/decompress workloads, name of uncompressed input file 00:04:52.921 [-S for crc32c workload, use this seed value (default 0) 00:04:52.921 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:52.921 [-f for fill workload, use this BYTE value (default 255) 00:04:52.921 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:52.921 [-y verify result if this switch is on] 00:04:52.921 [-a tasks to allocate per core (default: same value as -q)] 00:04:52.921 Can be used to spread operations across a wider range of memory. 
00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.921 00:04:52.921 real 0m0.023s 00:04:52.921 user 0m0.016s 00:04:52.921 sys 0m0.007s 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.921 11:09:18 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:52.921 ************************************ 00:04:52.921 END TEST accel_wrong_workload 00:04:52.921 ************************************ 00:04:52.921 Error: writing output failed: Broken pipe 00:04:52.921 11:09:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.921 11:09:18 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:52.921 11:09:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:52.921 11:09:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.921 11:09:18 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.921 ************************************ 00:04:52.921 START TEST accel_negative_buffers 00:04:52.921 ************************************ 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:52.921 11:09:18 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.921 11:09:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:52.921 11:09:18 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:52.921 -x option must be non-negative. 00:04:52.921 [2024-07-12 11:09:19.014761] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:52.921 accel_perf options: 00:04:52.921 [-h help message] 00:04:52.921 [-q queue depth per core] 00:04:52.921 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:52.921 [-T number of threads per core 00:04:52.921 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:52.921 [-t time in seconds] 00:04:52.921 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:52.921 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:52.921 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:52.921 [-l for compress/decompress workloads, name of uncompressed input file 00:04:52.921 [-S for crc32c workload, use this seed value (default 0) 00:04:52.921 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:52.921 [-f for fill workload, use this BYTE value (default 255) 00:04:52.921 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:52.921 [-y verify result if this switch is on] 00:04:52.921 [-a tasks to allocate per core (default: same value as -q)] 00:04:52.921 Can be used to spread operations across a wider range of memory. 
00:04:52.921 11:09:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:52.921 11:09:19 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.921 11:09:19 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.921 11:09:19 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.921 00:04:52.921 real 0m0.023s 00:04:52.921 user 0m0.013s 00:04:52.921 sys 0m0.010s 00:04:52.921 11:09:19 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.921 11:09:19 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:52.921 ************************************ 00:04:52.921 END TEST accel_negative_buffers 00:04:52.921 ************************************ 00:04:52.921 Error: writing output failed: Broken pipe 00:04:52.921 11:09:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.921 11:09:19 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:52.921 11:09:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:52.921 11:09:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.921 11:09:19 accel -- common/autotest_common.sh@10 -- # set +x 00:04:53.179 ************************************ 00:04:53.179 START TEST accel_crc32c 00:04:53.179 ************************************ 00:04:53.179 11:09:19 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:53.179 [2024-07-12 11:09:19.079079] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:53.179 [2024-07-12 11:09:19.079147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460958 ] 00:04:53.179 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.179 [2024-07-12 11:09:19.138628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.179 [2024-07-12 11:09:19.246085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 
11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:53.179 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.180 11:09:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:54.548 11:09:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.548 00:04:54.548 real 0m1.438s 00:04:54.548 user 0m1.292s 00:04:54.548 sys 0m0.147s 00:04:54.548 11:09:20 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.548 11:09:20 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:54.548 ************************************ 00:04:54.548 END TEST accel_crc32c 00:04:54.548 ************************************ 00:04:54.548 11:09:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:54.548 11:09:20 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:54.548 11:09:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:54.548 11:09:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.548 11:09:20 accel -- common/autotest_common.sh@10 -- # set +x 
00:04:54.548 ************************************ 00:04:54.548 START TEST accel_crc32c_C2 00:04:54.548 ************************************ 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:54.548 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:54.548 [2024-07-12 11:09:20.565921] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:54.548 [2024-07-12 11:09:20.565984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461164 ] 00:04:54.548 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.548 [2024-07-12 11:09:20.623664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.806 [2024-07-12 11:09:20.729339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:54.806 11:09:20 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:54.806 11:09:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:56.233 
11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.233 00:04:56.233 real 0m1.440s 00:04:56.233 user 0m1.306s 00:04:56.233 sys 0m0.136s 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.233 11:09:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:56.233 ************************************ 00:04:56.233 END TEST accel_crc32c_C2 00:04:56.233 ************************************ 00:04:56.233 11:09:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:56.233 11:09:22 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:56.233 11:09:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:56.233 11:09:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.233 11:09:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.233 ************************************ 00:04:56.233 START TEST accel_copy 00:04:56.233 ************************************ 00:04:56.233 11:09:22 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:56.233 11:09:22 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:56.233 [2024-07-12 11:09:22.055956] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:04:56.233 [2024-07-12 11:09:22.056018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461318 ] 00:04:56.234 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.234 [2024-07-12 11:09:22.114620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.234 [2024-07-12 11:09:22.218048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:56.234 11:09:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:57.606 11:09:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.606 00:04:57.606 real 0m1.439s 00:04:57.606 user 0m1.304s 00:04:57.606 sys 0m0.136s 00:04:57.606 11:09:23 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.606 11:09:23 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:57.606 ************************************ 00:04:57.606 END TEST accel_copy 00:04:57.606 ************************************ 00:04:57.606 11:09:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:57.606 11:09:23 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.606 11:09:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:04:57.606 11:09:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.606 11:09:23 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.606 ************************************ 00:04:57.606 START TEST accel_fill 00:04:57.606 ************************************ 00:04:57.606 11:09:23 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:57.606 11:09:23 
accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:57.606 11:09:23 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:57.606 [2024-07-12 11:09:23.538882] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:04:57.606 [2024-07-12 11:09:23.538952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461505 ]
00:04:57.606 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.606 [2024-07-12 11:09:23.596743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.606 [2024-07-12 11:09:23.702502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:04:57.864 11:09:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:04:59.236 11:09:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:59.236 11:09:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:04:59.236 11:09:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:59.236 
00:04:59.236 real 0m1.438s
00:04:59.236 user 0m1.298s
00:04:59.236 sys 0m0.141s
00:04:59.237 11:09:24 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:59.237 11:09:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:04:59.237 ************************************
00:04:59.237 END TEST accel_fill
00:04:59.237 ************************************
00:04:59.237 11:09:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:04:59.237 11:09:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:04:59.237 11:09:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:04:59.237 11:09:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:59.237 11:09:24 accel -- common/autotest_common.sh@10 -- # set +x
00:04:59.237 ************************************
00:04:59.237 START TEST accel_copy_crc32c
00:04:59.237 ************************************
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:04:59.237 [2024-07-12 11:09:25.025617] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:04:59.237 [2024-07-12 11:09:25.025681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461753 ]
00:04:59.237 EAL: No free 2048 kB hugepages reported on node 1
00:04:59.237 [2024-07-12 11:09:25.081668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.237 [2024-07-12 11:09:25.184755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:04:59.237 11:09:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:00.610 11:09:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:00.610 11:09:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:00.610 11:09:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:00.610 
00:05:00.610 real 0m1.434s
00:05:00.610 user 0m1.308s
00:05:00.610 sys 0m0.127s
00:05:00.610 11:09:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:00.610 11:09:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:00.610 ************************************
00:05:00.610 END TEST accel_copy_crc32c
00:05:00.610 ************************************
00:05:00.610 11:09:26 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:00.610 11:09:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:00.610 11:09:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:00.610 11:09:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:00.610 11:09:26 accel -- common/autotest_common.sh@10 -- # set +x
00:05:00.610 ************************************
00:05:00.610 START TEST accel_copy_crc32c_C2
00:05:00.610 ************************************
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:00.611 [2024-07-12 11:09:26.509458] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:05:00.611 [2024-07-12 11:09:26.509524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461911 ]
00:05:00.611 EAL: No free 2048 kB hugepages reported on node 1
00:05:00.611 [2024-07-12 11:09:26.567093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.611 [2024-07-12 11:09:26.668169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:00.611 11:09:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:01.985 11:09:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:01.985 11:09:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:01.985 11:09:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:01.985 
00:05:01.985 real 0m1.434s
00:05:01.985 user 0m1.300s
00:05:01.985 sys 0m0.135s
00:05:01.985 11:09:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:01.985 11:09:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:01.985 ************************************
00:05:01.985 END TEST accel_copy_crc32c_C2
00:05:01.985 ************************************
00:05:01.985 11:09:27 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:01.985 11:09:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:05:01.985 11:09:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:01.985 11:09:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:01.985 11:09:27 accel -- common/autotest_common.sh@10 -- # set +x
00:05:01.985 ************************************
00:05:01.985 START TEST accel_dualcast
00:05:01.985 ************************************
00:05:01.985 11:09:27 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:05:01.985 11:09:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:05:01.985 11:09:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:05:01.985 11:09:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:05:01.985 [2024-07-12 11:09:27.992217] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:05:01.985 [2024-07-12 11:09:27.992280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462064 ]
00:05:01.985 EAL: No free 2048 kB hugepages reported on node 1
00:05:01.985 [2024-07-12 11:09:28.051416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.245 [2024-07-12 11:09:28.160367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.245 11:09:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.245 11:09:28 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:03.629 11:09:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:03.630 11:09:29 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.630 00:05:03.630 real 0m1.437s 00:05:03.630 user 0m1.301s 00:05:03.630 sys 0m0.137s 00:05:03.630 11:09:29 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.630 11:09:29 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:03.630 ************************************ 00:05:03.630 END TEST accel_dualcast 00:05:03.630 ************************************ 00:05:03.630 11:09:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.630 11:09:29 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:03.630 11:09:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:03.630 11:09:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.630 11:09:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.630 ************************************ 00:05:03.630 START TEST accel_compare 00:05:03.630 ************************************ 00:05:03.630 11:09:29 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:03.630 [2024-07-12 11:09:29.477725] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:03.630 [2024-07-12 11:09:29.477790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462341 ] 00:05:03.630 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.630 [2024-07-12 11:09:29.536400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.630 [2024-07-12 11:09:29.652916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:03.630 
11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:03.630 
11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 
11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:03.630 11:09:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.042 11:09:30 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:05.042 11:09:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.042 00:05:05.042 real 0m1.438s 00:05:05.042 user 0m1.297s 00:05:05.042 sys 0m0.142s 00:05:05.042 11:09:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.042 11:09:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:05.042 ************************************ 00:05:05.042 END TEST accel_compare 00:05:05.042 ************************************ 00:05:05.042 11:09:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:05.042 11:09:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:05.042 11:09:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:05.042 11:09:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.042 11:09:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.042 ************************************ 00:05:05.042 START TEST accel_xor 00:05:05.042 ************************************ 00:05:05.042 11:09:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:05.042 11:09:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:05.042 [2024-07-12 11:09:30.968049] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
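Each case in this log reduces to one accel_perf invocation, e.g. `accel_perf -c /dev/fd/62 -t 1 -w xor -y` here, with `-x 3` added for the three-source xor variant. A hedged sketch of how such a flag set could be parsed with `getopts` — the flag letters are taken from the log, but their exact semantics inside accel_perf are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical parser for the accel_perf-style flags seen in the trace:
#   -t seconds   test duration
#   -w workload  operation to exercise (dualcast, compare, xor, ...)
#   -y           verify results
#   -x count     number of xor source buffers
parse_flags() {
  local OPTIND opt duration=1 workload= verify=false xor_srcs=2
  while getopts "t:w:yx:" opt; do
    case "$opt" in
      t) duration=$OPTARG ;;
      w) workload=$OPTARG ;;
      y) verify=true ;;
      x) xor_srcs=$OPTARG ;;
    esac
  done
  echo "$workload $duration $verify $xor_srcs"
}

parse_flags -t 1 -w xor -y -x 3
```

Note the `local OPTIND`: without resetting it, a second call to the function would resume parsing where the first left off.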
00:05:05.042 [2024-07-12 11:09:30.968113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462501 ] 00:05:05.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.042 [2024-07-12 11:09:31.025579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.042 [2024-07-12 11:09:31.128173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 
11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.300 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.301 11:09:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.672 00:05:06.672 real 0m1.437s 00:05:06.672 user 0m1.304s 00:05:06.672 sys 
0m0.134s 00:05:06.672 11:09:32 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.672 11:09:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:06.672 ************************************ 00:05:06.672 END TEST accel_xor 00:05:06.672 ************************************ 00:05:06.672 11:09:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.672 11:09:32 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:06.672 11:09:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:06.672 11:09:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.672 11:09:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.672 ************************************ 00:05:06.672 START TEST accel_xor 00:05:06.672 ************************************ 00:05:06.672 11:09:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.672 11:09:32 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:06.672 [2024-07-12 11:09:32.455668] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:06.672 [2024-07-12 11:09:32.455733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462658 ] 00:05:06.672 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.672 [2024-07-12 11:09:32.513454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.672 [2024-07-12 11:09:32.623536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
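The `run_test accel_xor accel_test ...`, `START TEST` / `END TEST` banners, and `real 0m1.437s` lines come from a wrapper that brackets and times each named case. A minimal stand-in for that pattern — a hypothetical simplification, not the real `run_test` from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Banner and time a named test case, in the style of the trace:
# prints START/END markers around the command and reports its timing.
run_test() {
  local name=$1
  shift
  echo "START TEST $name"
  time "$@"        # bash 'time' keyword; report goes to stderr
  echo "END TEST $name"
}

run_test demo_case sleep 0
```

Since the `time` keyword writes its `real`/`user`/`sys` report to stderr, the banners and the timing can be separated in the captured log streams.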
00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.672 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:06.673 11:09:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:08.044 11:09:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.044 00:05:08.044 real 0m1.434s 00:05:08.044 user 0m1.302s 00:05:08.044 sys 0m0.133s 00:05:08.044 11:09:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.044 11:09:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:08.044 ************************************ 00:05:08.044 END TEST accel_xor 00:05:08.044 ************************************ 00:05:08.044 11:09:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:08.044 11:09:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:08.044 11:09:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:08.044 11:09:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.044 11:09:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.044 ************************************ 00:05:08.044 START TEST accel_dif_verify 00:05:08.044 ************************************ 00:05:08.044 11:09:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:08.044 11:09:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:08.044 [2024-07-12 11:09:33.938182] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:08.044 [2024-07-12 11:09:33.938247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462931 ] 00:05:08.044 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.044 [2024-07-12 11:09:33.995802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.044 [2024-07-12 11:09:34.100994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:34 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val=0x1 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.044 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.045 11:09:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 11:09:35 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:09.418 11:09:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:05:09.418 00:05:09.418 real 0m1.419s 00:05:09.418 user 0m1.294s 00:05:09.418 sys 0m0.128s 00:05:09.418 11:09:35 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.418 11:09:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:09.419 ************************************ 00:05:09.419 END TEST accel_dif_verify 00:05:09.419 ************************************ 00:05:09.419 11:09:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.419 11:09:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:09.419 11:09:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:09.419 11:09:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.419 11:09:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.419 ************************************ 00:05:09.419 START TEST accel_dif_generate 00:05:09.419 ************************************ 00:05:09.419 11:09:35 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.419 11:09:35 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:09.419 11:09:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:09.419 [2024-07-12 11:09:35.407647] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:09.419 [2024-07-12 11:09:35.407713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463087 ] 00:05:09.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.419 [2024-07-12 11:09:35.464774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.677 [2024-07-12 11:09:35.571209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
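The dif_generate job is configured with 4096-byte transfers, 512-byte blocks, and 8 bytes of protection information per block. As a hedged sketch of the guard-tag math behind DIF generate/verify (not SPDK's `dif.c`, which also handles DIF formats, flags, and interleaved metadata), the 8-byte field is conventionally a 2-byte guard (CRC-16/T10-DIF over the block), a 2-byte application tag, and a 4-byte reference tag:

```python
def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-16/T10-DIF (polynomial 0x8BB7, no reflection, zero init)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def dif_generate(block: bytes, ref_tag: int, app_tag: int = 0) -> bytes:
    """Build an 8-byte DIF: guard CRC, application tag, reference tag."""
    guard = crc16_t10dif(block)
    return guard.to_bytes(2, 'big') + app_tag.to_bytes(2, 'big') + ref_tag.to_bytes(4, 'big')

def dif_verify(block: bytes, dif: bytes, ref_tag: int, app_tag: int = 0) -> bool:
    """Recompute the DIF for the block and compare against the stored field."""
    return dif == dif_generate(block, ref_tag, app_tag)
```

The dif_verify test earlier in this log exercises the inverse direction: recompute the guard over each 512-byte block and fail the operation if any tag mismatches.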
00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:09.677 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 11:09:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:05:11.050 11:09:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.050 00:05:11.050 real 0m1.439s 00:05:11.050 user 0m1.303s 00:05:11.050 sys 0m0.139s 00:05:11.050 11:09:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.050 11:09:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 ************************************ 00:05:11.050 END TEST accel_dif_generate 00:05:11.050 ************************************ 00:05:11.050 11:09:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.050 11:09:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:11.050 11:09:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:11.050 11:09:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.050 11:09:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.050 ************************************ 00:05:11.050 START TEST accel_dif_generate_copy 00:05:11.050 ************************************ 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:11.050 11:09:36 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:11.050 11:09:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:11.050 [2024-07-12 11:09:36.895262] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:11.050 [2024-07-12 11:09:36.895325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463246 ] 00:05:11.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.050 [2024-07-12 11:09:36.951635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.050 [2024-07-12 11:09:37.056948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.050 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.050 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.050 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.051 11:09:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.423 11:09:38 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.423 00:05:12.423 real 0m1.435s 00:05:12.423 user 0m1.302s 00:05:12.423 sys 0m0.135s 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.423 11:09:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:12.423 ************************************ 00:05:12.423 END TEST accel_dif_generate_copy 00:05:12.423 ************************************ 00:05:12.423 11:09:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.423 11:09:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:12.423 11:09:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.423 11:09:38 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:12.423 11:09:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.423 11:09:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.423 ************************************ 00:05:12.423 START TEST accel_comp 00:05:12.423 ************************************ 00:05:12.423 11:09:38 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.423 11:09:38 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:12.423 11:09:38 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:12.423 [2024-07-12 11:09:38.374773] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:12.423 [2024-07-12 11:09:38.374837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463515 ] 00:05:12.423 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.423 [2024-07-12 11:09:38.432096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.423 [2024-07-12 11:09:38.538402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:12.682 11:09:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.056 11:09:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.056 11:09:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.056 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.056 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.056 11:09:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:39 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:14.057 11:09:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.057 00:05:14.057 real 0m1.443s 00:05:14.057 user 0m1.311s 00:05:14.057 sys 0m0.134s 00:05:14.057 11:09:39 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.057 11:09:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:14.057 ************************************ 00:05:14.057 END TEST accel_comp 00:05:14.057 ************************************ 00:05:14.057 11:09:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.057 11:09:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.057 11:09:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:14.057 11:09:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.057 11:09:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.057 ************************************ 00:05:14.057 START TEST accel_decomp 00:05:14.057 ************************************ 00:05:14.057 11:09:39 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:14.057 11:09:39 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:05:14.057 [2024-07-12 11:09:39.866688] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:14.057 [2024-07-12 11:09:39.866749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463681 ] 00:05:14.057 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.057 [2024-07-12 11:09:39.922721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.057 [2024-07-12 11:09:40.028396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 
11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.057 11:09:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.429 11:09:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:15.430 11:09:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.430 00:05:15.430 real 0m1.448s 00:05:15.430 user 0m1.313s 00:05:15.430 sys 0m0.138s 00:05:15.430 11:09:41 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.430 11:09:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:15.430 ************************************ 00:05:15.430 END TEST accel_decomp 00:05:15.430 ************************************ 00:05:15.430 11:09:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.430 11:09:41 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.430 11:09:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:15.430 11:09:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.430 11:09:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.430 ************************************ 00:05:15.430 START TEST accel_decomp_full 00:05:15.430 ************************************ 00:05:15.430 11:09:41 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.430 
11:09:41 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:15.430 11:09:41 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:15.430 [2024-07-12 11:09:41.364222] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:05:15.430 [2024-07-12 11:09:41.364290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463833 ] 00:05:15.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.430 [2024-07-12 11:09:41.425262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.430 [2024-07-12 11:09:41.529400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:15.688 11:09:41 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:15.688 11:09:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:17.065 11:09:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.065 00:05:17.065 real 0m1.452s 00:05:17.065 user 0m1.315s 00:05:17.065 sys 0m0.138s 00:05:17.065 11:09:42 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.065 11:09:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:17.065 ************************************ 00:05:17.065 END TEST accel_decomp_full 00:05:17.065 ************************************ 00:05:17.065 11:09:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.065 11:09:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.065 11:09:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:17.065 11:09:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.065 11:09:42 accel 
-- common/autotest_common.sh@10 -- # set +x 00:05:17.065 ************************************ 00:05:17.065 START TEST accel_decomp_mcore 00:05:17.065 ************************************ 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:17.065 11:09:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:17.065 [2024-07-12 11:09:42.869124] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:05:17.065 [2024-07-12 11:09:42.869196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464072 ] 00:05:17.065 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.065 [2024-07-12 11:09:42.926962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.065 [2024-07-12 11:09:43.034290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.065 [2024-07-12 11:09:43.034355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.065 [2024-07-12 11:09:43.034423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.065 [2024-07-12 11:09:43.034426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:17.065 11:09:43 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.065 11:09:43 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:17.065 11:09:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.439 
11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.439 00:05:18.439 real 0m1.448s 00:05:18.439 user 0m4.732s 00:05:18.439 sys 0m0.152s 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.439 11:09:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:18.439 ************************************ 00:05:18.439 END TEST accel_decomp_mcore 00:05:18.439 ************************************ 00:05:18.439 11:09:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.439 11:09:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.439 11:09:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:18.439 11:09:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.439 11:09:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.439 ************************************ 00:05:18.439 START TEST accel_decomp_full_mcore 00:05:18.439 ************************************ 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:18.439 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:18.439 [2024-07-12 11:09:44.359999] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:05:18.439 [2024-07-12 11:09:44.360060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464266 ] 00:05:18.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.439 [2024-07-12 11:09:44.417585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.439 [2024-07-12 11:09:44.521571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.439 [2024-07-12 11:09:44.521678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.439 [2024-07-12 11:09:44.521757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.439 [2024-07-12 11:09:44.521760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.698 11:09:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.072 
11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.072 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.073 00:05:20.073 real 0m1.462s 00:05:20.073 user 0m4.805s 00:05:20.073 sys 0m0.140s 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.073 11:09:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:20.073 ************************************ 00:05:20.073 END TEST accel_decomp_full_mcore 00:05:20.073 ************************************ 00:05:20.073 11:09:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.073 11:09:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.073 11:09:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:20.073 11:09:45 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:05:20.073 11:09:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.073 ************************************ 00:05:20.073 START TEST accel_decomp_mthread 00:05:20.073 ************************************ 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:20.073 11:09:45 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:20.073 [2024-07-12 11:09:45.874439] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:20.073 [2024-07-12 11:09:45.874503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464433 ] 00:05:20.073 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.073 [2024-07-12 11:09:45.931942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.073 [2024-07-12 11:09:46.034954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:20.073 
11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:20.073 11:09:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.446 00:05:21.446 real 0m1.444s 00:05:21.446 user 0m1.305s 00:05:21.446 sys 0m0.141s 00:05:21.446 11:09:47 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.446 11:09:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:21.446 ************************************ 00:05:21.446 END TEST accel_decomp_mthread 00:05:21.446 ************************************ 00:05:21.446 11:09:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.446 11:09:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.446 11:09:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:21.446 11:09:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.446 11:09:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.446 ************************************ 00:05:21.446 START TEST accel_decomp_full_mthread 00:05:21.446 ************************************ 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.446 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:21.447 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:21.447 [2024-07-12 11:09:47.367002] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:21.447 [2024-07-12 11:09:47.367070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464599 ] 00:05:21.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.447 [2024-07-12 11:09:47.423196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.447 [2024-07-12 11:09:47.528028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.705 11:09:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.080 00:05:23.080 real 0m1.462s 00:05:23.080 user 0m1.337s 00:05:23.080 sys 0m0.127s 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.080 11:09:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:23.080 ************************************ 00:05:23.080 END TEST accel_decomp_full_mthread 00:05:23.080 ************************************ 00:05:23.080 11:09:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.080 11:09:48 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:23.080 11:09:48 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:23.080 
11:09:48 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:23.080 11:09:48 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:23.080 11:09:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.080 11:09:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.080 11:09:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.080 11:09:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.080 11:09:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.080 11:09:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.080 11:09:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.080 11:09:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:23.080 11:09:48 accel -- accel/accel.sh@41 -- # jq -r . 00:05:23.080 ************************************ 00:05:23.080 START TEST accel_dif_functional_tests 00:05:23.080 ************************************ 00:05:23.080 11:09:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:23.080 [2024-07-12 11:09:48.896184] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:05:23.080 [2024-07-12 11:09:48.896244] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464864 ] 00:05:23.080 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.080 [2024-07-12 11:09:48.951177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.080 [2024-07-12 11:09:49.067379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.080 [2024-07-12 11:09:49.067445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.080 [2024-07-12 11:09:49.067448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.080 00:05:23.080 00:05:23.080 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.080 http://cunit.sourceforge.net/ 00:05:23.080 00:05:23.080 00:05:23.080 Suite: accel_dif 00:05:23.080 Test: verify: DIF generated, GUARD check ...passed 00:05:23.080 Test: verify: DIF generated, APPTAG check ...passed 00:05:23.080 Test: verify: DIF generated, REFTAG check ...passed 00:05:23.080 Test: verify: DIF not generated, GUARD check ...[2024-07-12 11:09:49.165101] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:23.080 passed 00:05:23.080 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 11:09:49.165185] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:23.080 passed 00:05:23.080 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 11:09:49.165218] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:23.080 passed 00:05:23.080 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:23.080 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 11:09:49.165295] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:05:23.080 passed 00:05:23.080 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:23.080 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:23.080 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:23.080 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 11:09:49.165433] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:23.080 passed 00:05:23.080 Test: verify copy: DIF generated, GUARD check ...passed 00:05:23.080 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:23.080 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:23.080 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 11:09:49.165615] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:23.080 passed 00:05:23.080 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 11:09:49.165669] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:23.080 passed 00:05:23.080 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 11:09:49.165703] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:23.080 passed 00:05:23.080 Test: generate copy: DIF generated, GUARD check ...passed 00:05:23.080 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:23.080 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:23.080 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:23.080 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:23.080 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:23.080 Test: generate copy: iovecs-len validate ...[2024-07-12 11:09:49.165972] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:23.080 passed 00:05:23.080 Test: generate copy: buffer alignment validate ...passed 00:05:23.080 00:05:23.080 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.080 suites 1 1 n/a 0 0 00:05:23.080 tests 26 26 26 0 0 00:05:23.080 asserts 115 115 115 0 n/a 00:05:23.080 00:05:23.080 Elapsed time = 0.003 seconds 00:05:23.339 00:05:23.339 real 0m0.554s 00:05:23.339 user 0m0.851s 00:05:23.339 sys 0m0.180s 00:05:23.339 11:09:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.339 11:09:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:23.339 ************************************ 00:05:23.339 END TEST accel_dif_functional_tests 00:05:23.339 ************************************ 00:05:23.339 11:09:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.339 00:05:23.339 real 0m32.592s 00:05:23.339 user 0m36.184s 00:05:23.339 sys 0m4.394s 00:05:23.339 11:09:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.339 11:09:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.339 ************************************ 00:05:23.339 END TEST accel 00:05:23.339 ************************************ 00:05:23.339 11:09:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.339 11:09:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:23.339 11:09:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.339 11:09:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.339 11:09:49 -- common/autotest_common.sh@10 -- # set +x 00:05:23.598 ************************************ 00:05:23.598 START TEST accel_rpc 00:05:23.598 ************************************ 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:23.598 * Looking for test storage... 
00:05:23.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:23.598 11:09:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.598 11:09:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=464930 00:05:23.598 11:09:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:23.598 11:09:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 464930 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 464930 ']' 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.598 11:09:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.598 [2024-07-12 11:09:49.589808] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:05:23.598 [2024-07-12 11:09:49.589910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464930 ] 00:05:23.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.598 [2024-07-12 11:09:49.651041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.856 [2024-07-12 11:09:49.760923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.856 11:09:49 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.856 11:09:49 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:23.856 11:09:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:23.856 11:09:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:23.856 11:09:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:23.856 11:09:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:23.856 11:09:49 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:23.856 11:09:49 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.856 11:09:49 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.856 11:09:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.856 ************************************ 00:05:23.856 START TEST accel_assign_opcode 00:05:23.856 ************************************ 00:05:23.856 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:23.856 11:09:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:23.856 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:05:23.857 [2024-07-12 11:09:49.821540] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 [2024-07-12 11:09:49.829548] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.857 11:09:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.115 software 00:05:24.115 00:05:24.115 real 0m0.289s 00:05:24.115 user 0m0.039s 00:05:24.115 sys 0m0.007s 00:05:24.115 11:09:50 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.115 11:09:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:24.115 ************************************ 00:05:24.115 END TEST accel_assign_opcode 00:05:24.115 ************************************ 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.115 11:09:50 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 464930 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 464930 ']' 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 464930 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 464930 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 464930' 00:05:24.115 killing process with pid 464930 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@967 -- # kill 464930 00:05:24.115 11:09:50 accel_rpc -- common/autotest_common.sh@972 -- # wait 464930 00:05:24.680 00:05:24.680 real 0m1.096s 00:05:24.680 user 0m1.050s 00:05:24.680 sys 0m0.403s 00:05:24.680 11:09:50 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.680 11:09:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.680 ************************************ 00:05:24.680 END TEST accel_rpc 00:05:24.680 ************************************ 00:05:24.680 11:09:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.680 11:09:50 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:24.680 11:09:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.680 11:09:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.680 11:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.680 ************************************ 00:05:24.680 START TEST app_cmdline 00:05:24.680 ************************************ 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:24.680 * Looking for test storage... 00:05:24.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:24.680 11:09:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:24.680 11:09:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=465140 00:05:24.680 11:09:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:24.680 11:09:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 465140 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 465140 ']' 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.680 11:09:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:24.680 [2024-07-12 11:09:50.735975] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:05:24.680 [2024-07-12 11:09:50.736068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465140 ] 00:05:24.680 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.680 [2024-07-12 11:09:50.793988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.937 [2024-07-12 11:09:50.902526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.219 11:09:51 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.219 11:09:51 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:25.219 11:09:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:25.475 { 00:05:25.475 "version": "SPDK v24.09-pre git sha1 a7a09b9a0", 00:05:25.475 "fields": { 00:05:25.475 "major": 24, 00:05:25.475 "minor": 9, 00:05:25.475 "patch": 0, 00:05:25.475 "suffix": "-pre", 00:05:25.475 "commit": "a7a09b9a0" 00:05:25.475 } 00:05:25.475 } 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:25.475 11:09:51 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:25.475 11:09:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:25.475 11:09:51 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:25.732 request: 00:05:25.732 { 00:05:25.732 "method": "env_dpdk_get_mem_stats", 00:05:25.732 "req_id": 1 
00:05:25.732 } 00:05:25.732 Got JSON-RPC error response 00:05:25.732 response: 00:05:25.732 { 00:05:25.732 "code": -32601, 00:05:25.732 "message": "Method not found" 00:05:25.732 } 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.732 11:09:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 465140 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 465140 ']' 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 465140 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 465140 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 465140' 00:05:25.732 killing process with pid 465140 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@967 -- # kill 465140 00:05:25.732 11:09:51 app_cmdline -- common/autotest_common.sh@972 -- # wait 465140 00:05:26.299 00:05:26.299 real 0m1.496s 00:05:26.299 user 0m1.823s 00:05:26.299 sys 0m0.450s 00:05:26.299 11:09:52 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.299 11:09:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.299 ************************************ 00:05:26.299 END TEST app_cmdline 00:05:26.299 ************************************ 00:05:26.299 11:09:52 -- 
common/autotest_common.sh@1142 -- # return 0 00:05:26.299 11:09:52 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:26.299 11:09:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.299 11:09:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.299 11:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.299 ************************************ 00:05:26.299 START TEST version 00:05:26.299 ************************************ 00:05:26.299 11:09:52 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:26.299 * Looking for test storage... 00:05:26.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:26.299 11:09:52 version -- app/version.sh@17 -- # get_header_version major 00:05:26.299 11:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.299 11:09:52 version -- app/version.sh@17 -- # major=24 00:05:26.299 11:09:52 version -- app/version.sh@18 -- # get_header_version minor 00:05:26.299 11:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.299 11:09:52 version -- app/version.sh@18 -- # minor=9 00:05:26.299 11:09:52 version -- app/version.sh@19 -- # get_header_version patch 00:05:26.299 11:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.299 
11:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.299 11:09:52 version -- app/version.sh@19 -- # patch=0 00:05:26.299 11:09:52 version -- app/version.sh@20 -- # get_header_version suffix 00:05:26.299 11:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:26.299 11:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:26.299 11:09:52 version -- app/version.sh@20 -- # suffix=-pre 00:05:26.299 11:09:52 version -- app/version.sh@22 -- # version=24.9 00:05:26.299 11:09:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:26.299 11:09:52 version -- app/version.sh@28 -- # version=24.9rc0 00:05:26.299 11:09:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:26.299 11:09:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:26.299 11:09:52 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:26.299 11:09:52 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:26.299 00:05:26.299 real 0m0.098s 00:05:26.299 user 0m0.049s 00:05:26.299 sys 0m0.070s 00:05:26.299 11:09:52 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.299 11:09:52 version -- common/autotest_common.sh@10 -- # set +x 00:05:26.299 ************************************ 00:05:26.299 END TEST version 00:05:26.299 ************************************ 00:05:26.299 11:09:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.299 11:09:52 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@198 -- # uname -s 00:05:26.299 11:09:52 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:26.299 11:09:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:26.299 11:09:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:26.299 11:09:52 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:26.299 11:09:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.299 11:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.299 11:09:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:26.299 11:09:52 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:26.299 11:09:52 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.299 11:09:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:26.299 11:09:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.299 11:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.299 ************************************ 00:05:26.299 START TEST nvmf_tcp 00:05:26.299 ************************************ 00:05:26.299 11:09:52 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:26.299 * Looking for test storage... 00:05:26.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.299 11:09:52 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.299 11:09:52 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.299 11:09:52 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.299 11:09:52 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.299 11:09:52 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.299 11:09:52 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.300 11:09:52 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.300 11:09:52 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:26.300 11:09:52 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.300 11:09:52 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:26.300 11:09:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.300 11:09:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:26.300 11:09:52 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:26.300 11:09:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:26.300 11:09:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.300 11:09:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.558 ************************************ 00:05:26.558 START TEST nvmf_example 00:05:26.558 ************************************ 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:26.558 * Looking for test storage... 
00:05:26.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.558 11:09:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.558 11:09:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:26.558 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:26.559 
11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:26.559 11:09:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.458 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:28.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:28.459 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:28.459 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:28.459 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.459 11:09:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.459 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:28.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:28.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:05:28.717 00:05:28.717 --- 10.0.0.2 ping statistics --- 00:05:28.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.717 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:05:28.717 00:05:28.717 --- 10.0.0.1 ping statistics --- 00:05:28.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.717 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.717 11:09:54 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:28.717 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=467162 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 467162 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 467162 ']' 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.718 11:09:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:28.718 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 11:09:55 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.972 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:29.972 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.972 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.972 11:09:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.972 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:29.972 11:09:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:29.972 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.193 Initializing NVMe Controllers 00:05:42.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:42.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:42.193 Initialization complete. Launching workers. 
00:05:42.193 ======================================================== 00:05:42.193 Latency(us) 00:05:42.193 Device Information : IOPS MiB/s Average min max 00:05:42.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14522.79 56.73 4406.62 865.30 46040.01 00:05:42.193 ======================================================== 00:05:42.194 Total : 14522.79 56.73 4406.62 865.30 46040.01 00:05:42.194 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:42.194 rmmod nvme_tcp 00:05:42.194 rmmod nvme_fabrics 00:05:42.194 rmmod nvme_keyring 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 467162 ']' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 467162 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 467162 ']' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 467162 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467162 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467162' 00:05:42.194 killing process with pid 467162 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 467162 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 467162 00:05:42.194 nvmf threads initialize successfully 00:05:42.194 bdev subsystem init successfully 00:05:42.194 created a nvmf target service 00:05:42.194 create targets's poll groups done 00:05:42.194 all subsystems of target started 00:05:42.194 nvmf target is running 00:05:42.194 all subsystems of target stopped 00:05:42.194 destroy targets's poll groups done 00:05:42.194 destroyed the nvmf target service 00:05:42.194 bdev subsystem finish successfully 00:05:42.194 nvmf threads destroy successfully 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:42.194 11:10:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.452 11:10:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:42.452 11:10:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:42.452 11:10:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.452 11:10:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 00:05:42.452 real 0m16.098s 00:05:42.452 user 0m44.802s 00:05:42.452 sys 0m3.762s 00:05:42.452 11:10:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.452 11:10:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 ************************************ 00:05:42.452 END TEST nvmf_example 00:05:42.452 ************************************ 00:05:42.452 11:10:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:42.452 11:10:08 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:42.452 11:10:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.452 11:10:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.452 11:10:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.713 ************************************ 00:05:42.713 START TEST nvmf_filesystem 00:05:42.713 ************************************ 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:42.713 * Looking for test storage... 
00:05:42.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:42.713 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:42.713 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:42.714 #define SPDK_CONFIG_H 00:05:42.714 
#define SPDK_CONFIG_APPS 1 00:05:42.714 #define SPDK_CONFIG_ARCH native 00:05:42.714 #undef SPDK_CONFIG_ASAN 00:05:42.714 #undef SPDK_CONFIG_AVAHI 00:05:42.714 #undef SPDK_CONFIG_CET 00:05:42.714 #define SPDK_CONFIG_COVERAGE 1 00:05:42.714 #define SPDK_CONFIG_CROSS_PREFIX 00:05:42.714 #undef SPDK_CONFIG_CRYPTO 00:05:42.714 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:42.714 #undef SPDK_CONFIG_CUSTOMOCF 00:05:42.714 #undef SPDK_CONFIG_DAOS 00:05:42.714 #define SPDK_CONFIG_DAOS_DIR 00:05:42.714 #define SPDK_CONFIG_DEBUG 1 00:05:42.714 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:42.714 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:42.714 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:42.714 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:42.714 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:42.714 #undef SPDK_CONFIG_DPDK_UADK 00:05:42.714 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:42.714 #define SPDK_CONFIG_EXAMPLES 1 00:05:42.714 #undef SPDK_CONFIG_FC 00:05:42.714 #define SPDK_CONFIG_FC_PATH 00:05:42.714 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:42.714 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:42.714 #undef SPDK_CONFIG_FUSE 00:05:42.714 #undef SPDK_CONFIG_FUZZER 00:05:42.714 #define SPDK_CONFIG_FUZZER_LIB 00:05:42.714 #undef SPDK_CONFIG_GOLANG 00:05:42.714 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:42.714 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:42.714 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:42.714 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:42.714 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:42.714 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:42.714 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:42.714 #define SPDK_CONFIG_IDXD 1 00:05:42.714 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:42.714 #undef SPDK_CONFIG_IPSEC_MB 00:05:42.714 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:42.714 #define SPDK_CONFIG_ISAL 1 00:05:42.714 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:42.714 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:05:42.714 #define SPDK_CONFIG_LIBDIR 00:05:42.714 #undef SPDK_CONFIG_LTO 00:05:42.714 #define SPDK_CONFIG_MAX_LCORES 128 00:05:42.714 #define SPDK_CONFIG_NVME_CUSE 1 00:05:42.714 #undef SPDK_CONFIG_OCF 00:05:42.714 #define SPDK_CONFIG_OCF_PATH 00:05:42.714 #define SPDK_CONFIG_OPENSSL_PATH 00:05:42.714 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:42.714 #define SPDK_CONFIG_PGO_DIR 00:05:42.714 #undef SPDK_CONFIG_PGO_USE 00:05:42.714 #define SPDK_CONFIG_PREFIX /usr/local 00:05:42.714 #undef SPDK_CONFIG_RAID5F 00:05:42.714 #undef SPDK_CONFIG_RBD 00:05:42.714 #define SPDK_CONFIG_RDMA 1 00:05:42.714 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:42.714 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:42.714 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:42.714 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:42.714 #define SPDK_CONFIG_SHARED 1 00:05:42.714 #undef SPDK_CONFIG_SMA 00:05:42.714 #define SPDK_CONFIG_TESTS 1 00:05:42.714 #undef SPDK_CONFIG_TSAN 00:05:42.714 #define SPDK_CONFIG_UBLK 1 00:05:42.714 #define SPDK_CONFIG_UBSAN 1 00:05:42.714 #undef SPDK_CONFIG_UNIT_TESTS 00:05:42.714 #undef SPDK_CONFIG_URING 00:05:42.714 #define SPDK_CONFIG_URING_PATH 00:05:42.714 #undef SPDK_CONFIG_URING_ZNS 00:05:42.714 #undef SPDK_CONFIG_USDT 00:05:42.714 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:42.714 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:42.714 #define SPDK_CONFIG_VFIO_USER 1 00:05:42.714 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:42.714 #define SPDK_CONFIG_VHOST 1 00:05:42.714 #define SPDK_CONFIG_VIRTIO 1 00:05:42.714 #undef SPDK_CONFIG_VTUNE 00:05:42.714 #define SPDK_CONFIG_VTUNE_DIR 00:05:42.714 #define SPDK_CONFIG_WERROR 1 00:05:42.714 #define SPDK_CONFIG_WPDK_DIR 00:05:42.714 #undef SPDK_CONFIG_XNVME 00:05:42.714 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:42.714 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:42.715 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:42.715 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:42.715 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:42.715 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:42.715 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:42.715 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:42.716 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:42.716 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 468870 ]] 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 468870 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.yl7xKn 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yl7xKn/tests/target /tmp/spdk.yl7xKn 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:42.716 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:42.717 11:10:08 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56609984512 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5384724480 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30993772544 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3579904 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=8761344 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30997020672 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=335872 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:42.717 * Looking for test storage... 
00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56609984512 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7599316992 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.717 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:42.718 11:10:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:45.254 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:45.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:45.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:45.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:45.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:45.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:45.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:05:45.255 00:05:45.255 --- 10.0.0.2 ping statistics --- 00:05:45.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.255 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:05:45.255 11:10:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:45.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:45.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:05:45.255 00:05:45.255 --- 10.0.0.1 ping statistics --- 00:05:45.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.255 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.255 ************************************ 00:05:45.255 START TEST nvmf_filesystem_no_in_capsule 00:05:45.255 ************************************ 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=470501 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 470501 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 470501 ']' 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.255 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.255 [2024-07-12 11:10:11.105544] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:45.255 [2024-07-12 11:10:11.105644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:45.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.255 [2024-07-12 11:10:11.171517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.255 [2024-07-12 11:10:11.283523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:45.255 [2024-07-12 11:10:11.283572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:45.255 [2024-07-12 11:10:11.283601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.255 [2024-07-12 11:10:11.283612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.255 [2024-07-12 11:10:11.283622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:45.255 [2024-07-12 11:10:11.283709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.255 [2024-07-12 11:10:11.283743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.256 [2024-07-12 11:10:11.283798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.256 [2024-07-12 11:10:11.283801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 [2024-07-12 11:10:11.442034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:45.516 11:10:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 [2024-07-12 11:10:11.626169] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.516 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:45.516 { 00:05:45.516 "name": "Malloc1", 00:05:45.516 "aliases": [ 00:05:45.516 "d7582a0f-e008-4f77-9bea-d7307b6c508c" 00:05:45.516 ], 00:05:45.516 "product_name": "Malloc disk", 
00:05:45.516 "block_size": 512, 00:05:45.516 "num_blocks": 1048576, 00:05:45.516 "uuid": "d7582a0f-e008-4f77-9bea-d7307b6c508c", 00:05:45.516 "assigned_rate_limits": { 00:05:45.516 "rw_ios_per_sec": 0, 00:05:45.516 "rw_mbytes_per_sec": 0, 00:05:45.516 "r_mbytes_per_sec": 0, 00:05:45.516 "w_mbytes_per_sec": 0 00:05:45.516 }, 00:05:45.516 "claimed": true, 00:05:45.516 "claim_type": "exclusive_write", 00:05:45.516 "zoned": false, 00:05:45.516 "supported_io_types": { 00:05:45.516 "read": true, 00:05:45.516 "write": true, 00:05:45.516 "unmap": true, 00:05:45.516 "flush": true, 00:05:45.517 "reset": true, 00:05:45.517 "nvme_admin": false, 00:05:45.517 "nvme_io": false, 00:05:45.517 "nvme_io_md": false, 00:05:45.517 "write_zeroes": true, 00:05:45.517 "zcopy": true, 00:05:45.517 "get_zone_info": false, 00:05:45.517 "zone_management": false, 00:05:45.517 "zone_append": false, 00:05:45.517 "compare": false, 00:05:45.517 "compare_and_write": false, 00:05:45.517 "abort": true, 00:05:45.517 "seek_hole": false, 00:05:45.517 "seek_data": false, 00:05:45.517 "copy": true, 00:05:45.517 "nvme_iov_md": false 00:05:45.517 }, 00:05:45.517 "memory_domains": [ 00:05:45.517 { 00:05:45.517 "dma_device_id": "system", 00:05:45.517 "dma_device_type": 1 00:05:45.517 }, 00:05:45.517 { 00:05:45.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.517 "dma_device_type": 2 00:05:45.517 } 00:05:45.517 ], 00:05:45.517 "driver_specific": {} 00:05:45.517 } 00:05:45.517 ]' 00:05:45.517 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:45.777 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:45.777 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:45.777 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:45.777 
11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:45.777 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:45.777 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:45.777 11:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:46.345 11:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:46.345 11:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:46.345 11:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:46.345 11:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:46.345 11:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:48.888 11:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:49.828 11:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:50.766 11:10:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:50.766 ************************************ 00:05:50.766 START TEST filesystem_ext4 00:05:50.766 ************************************ 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:05:50.766 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:50.766 mke2fs 1.46.5 (30-Dec-2021) 00:05:50.766 Discarding device blocks: 0/522240 done 00:05:50.766 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:50.766 Filesystem UUID: 630d7e20-2708-4190-a1bb-ae8b4739dd31 00:05:50.766 Superblock backups stored on blocks: 00:05:50.766 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:50.766 00:05:50.766 Allocating group tables: 0/64 done 00:05:50.766 Writing inode tables: 0/64 done 00:05:51.024 Creating journal (8192 blocks): done 00:05:51.024 Writing superblocks and filesystem accounting information: 0/64 done 00:05:51.024 00:05:51.024 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:05:51.024 11:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 470501 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:51.591 00:05:51.591 real 0m0.899s 00:05:51.591 user 0m0.023s 00:05:51.591 sys 0m0.052s 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:51.591 ************************************ 00:05:51.591 END TEST filesystem_ext4 00:05:51.591 ************************************ 00:05:51.591 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.592 11:10:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:51.592 ************************************ 00:05:51.592 START TEST filesystem_btrfs 00:05:51.592 ************************************ 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:05:51.592 11:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:52.159 btrfs-progs v6.6.2 00:05:52.159 See https://btrfs.readthedocs.io for more 
information. 00:05:52.159 00:05:52.159 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:05:52.159 NOTE: several default settings have changed in version 5.15, please make sure 00:05:52.159 this does not affect your deployments: 00:05:52.159 - DUP for metadata (-m dup) 00:05:52.159 - enabled no-holes (-O no-holes) 00:05:52.159 - enabled free-space-tree (-R free-space-tree) 00:05:52.159 00:05:52.159 Label: (null) 00:05:52.159 UUID: 13d4ac7a-73b9-41c7-a266-aae58d394c0f 00:05:52.159 Node size: 16384 00:05:52.159 Sector size: 4096 00:05:52.159 Filesystem size: 510.00MiB 00:05:52.159 Block group profiles: 00:05:52.159 Data: single 8.00MiB 00:05:52.159 Metadata: DUP 32.00MiB 00:05:52.159 System: DUP 8.00MiB 00:05:52.159 SSD detected: yes 00:05:52.159 Zoned device: no 00:05:52.159 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:52.159 Runtime features: free-space-tree 00:05:52.159 Checksum: crc32c 00:05:52.159 Number of devices: 1 00:05:52.159 Devices: 00:05:52.159 ID SIZE PATH 00:05:52.159 1 510.00MiB /dev/nvme0n1p1 00:05:52.159 00:05:52.159 11:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:05:52.159 11:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:53.095 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:53.095 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:53.095 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:53.095 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:53.095 11:10:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:53.095 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:53.352 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 470501 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:53.353 00:05:53.353 real 0m1.566s 00:05:53.353 user 0m0.024s 00:05:53.353 sys 0m0.113s 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:53.353 ************************************ 00:05:53.353 END TEST filesystem_btrfs 00:05:53.353 ************************************ 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:53.353 ************************************ 00:05:53.353 START TEST filesystem_xfs 00:05:53.353 ************************************ 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:05:53.353 11:10:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:53.353 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:05:53.353 = sectsz=512 attr=2, projid32bit=1 00:05:53.353 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:53.353 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:53.353 data = bsize=4096 blocks=130560, imaxpct=25 00:05:53.353 = sunit=0 swidth=0 blks 00:05:53.353 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:53.353 log =internal log bsize=4096 blocks=16384, version=2 00:05:53.353 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:53.353 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:54.290 Discarding blocks...Done. 00:05:54.290 11:10:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:05:54.290 11:10:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 470501 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:56.825 11:10:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:56.825 00:05:56.825 real 0m3.464s 00:05:56.825 user 0m0.021s 00:05:56.825 sys 0m0.054s 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:05:56.825 ************************************ 00:05:56.825 END TEST filesystem_xfs 00:05:56.825 ************************************ 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:56.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:05:56.825 11:10:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 470501 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 470501 ']' 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 470501 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 470501 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470501' 00:05:56.825 killing process with pid 470501 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 470501 00:05:56.825 11:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 470501 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:05:57.393 00:05:57.393 real 0m12.335s 00:05:57.393 user 0m47.215s 00:05:57.393 sys 0m1.876s 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.393 ************************************ 00:05:57.393 END TEST nvmf_filesystem_no_in_capsule 00:05:57.393 ************************************ 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.393 ************************************ 00:05:57.393 START TEST 
nvmf_filesystem_in_capsule 00:05:57.393 ************************************ 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=472194 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 472194 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 472194 ']' 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.393 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.393 [2024-07-12 11:10:23.497774] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:05:57.393 [2024-07-12 11:10:23.497855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.651 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.651 [2024-07-12 11:10:23.561030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.651 [2024-07-12 11:10:23.660964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:57.651 [2024-07-12 11:10:23.661019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:57.651 [2024-07-12 11:10:23.661046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.651 [2024-07-12 11:10:23.661058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.651 [2024-07-12 11:10:23.661067] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:57.651 [2024-07-12 11:10:23.661145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.651 [2024-07-12 11:10:23.661210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.651 [2024-07-12 11:10:23.661277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.651 [2024-07-12 11:10:23.661280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 [2024-07-12 11:10:23.820724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 Malloc1 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.909 11:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.909 11:10:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 [2024-07-12 11:10:24.007174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:05:57.909 { 00:05:57.909 "name": "Malloc1", 00:05:57.909 "aliases": [ 00:05:57.909 "782dcd60-c16d-4bb6-bd85-3f2a6473feee" 00:05:57.909 ], 00:05:57.909 "product_name": "Malloc disk", 00:05:57.909 "block_size": 512, 00:05:57.909 "num_blocks": 1048576, 00:05:57.909 "uuid": "782dcd60-c16d-4bb6-bd85-3f2a6473feee", 00:05:57.909 "assigned_rate_limits": { 
00:05:57.909 "rw_ios_per_sec": 0, 00:05:57.909 "rw_mbytes_per_sec": 0, 00:05:57.909 "r_mbytes_per_sec": 0, 00:05:57.909 "w_mbytes_per_sec": 0 00:05:57.909 }, 00:05:57.909 "claimed": true, 00:05:57.909 "claim_type": "exclusive_write", 00:05:57.909 "zoned": false, 00:05:57.909 "supported_io_types": { 00:05:57.909 "read": true, 00:05:57.909 "write": true, 00:05:57.909 "unmap": true, 00:05:57.909 "flush": true, 00:05:57.909 "reset": true, 00:05:57.909 "nvme_admin": false, 00:05:57.909 "nvme_io": false, 00:05:57.909 "nvme_io_md": false, 00:05:57.909 "write_zeroes": true, 00:05:57.909 "zcopy": true, 00:05:57.909 "get_zone_info": false, 00:05:57.909 "zone_management": false, 00:05:57.909 "zone_append": false, 00:05:57.909 "compare": false, 00:05:57.909 "compare_and_write": false, 00:05:57.909 "abort": true, 00:05:57.909 "seek_hole": false, 00:05:57.909 "seek_data": false, 00:05:57.909 "copy": true, 00:05:57.909 "nvme_iov_md": false 00:05:57.909 }, 00:05:57.909 "memory_domains": [ 00:05:57.909 { 00:05:57.909 "dma_device_id": "system", 00:05:57.909 "dma_device_type": 1 00:05:57.909 }, 00:05:57.909 { 00:05:57.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.909 "dma_device_type": 2 00:05:57.909 } 00:05:57.909 ], 00:05:57.909 "driver_specific": {} 00:05:57.909 } 00:05:57.909 ]' 00:05:57.909 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:05:58.167 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:05:58.167 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:05:58.167 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:05:58.167 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:05:58.167 11:10:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:05:58.167 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:58.167 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:58.736 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:58.736 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:05:58.736 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:05:58.736 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:05:58.736 11:10:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:00.645 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:00.646 11:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:00.903 11:10:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:01.838 11:10:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:02.776 ************************************ 00:06:02.776 START TEST filesystem_in_capsule_ext4 00:06:02.776 ************************************ 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:02.776 11:10:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:02.776 11:10:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:02.776 mke2fs 1.46.5 (30-Dec-2021) 00:06:02.776 Discarding device blocks: 0/522240 done 00:06:02.776 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:02.776 Filesystem UUID: 005edfc0-4b94-40b7-b9ad-d2e649128cf5 00:06:02.776 Superblock backups stored on blocks: 00:06:02.776 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:02.776 00:06:02.776 Allocating group tables: 0/64 done 00:06:02.776 Writing inode tables: 0/64 done 00:06:05.379 Creating journal (8192 blocks): done 00:06:06.465 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:06:06.465 00:06:06.465 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:06.465 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:07.033 11:10:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 472194 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:07.033 00:06:07.033 real 0m4.334s 00:06:07.033 user 0m0.021s 00:06:07.033 sys 0m0.058s 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:07.033 ************************************ 00:06:07.033 END TEST filesystem_in_capsule_ext4 00:06:07.033 ************************************ 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:07.033 11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.033 
11:10:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.033 ************************************ 00:06:07.033 START TEST filesystem_in_capsule_btrfs 00:06:07.033 ************************************ 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:07.033 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:06:07.293 btrfs-progs v6.6.2 00:06:07.293 See https://btrfs.readthedocs.io for more information. 00:06:07.293 00:06:07.293 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:07.293 NOTE: several default settings have changed in version 5.15, please make sure 00:06:07.293 this does not affect your deployments: 00:06:07.293 - DUP for metadata (-m dup) 00:06:07.293 - enabled no-holes (-O no-holes) 00:06:07.293 - enabled free-space-tree (-R free-space-tree) 00:06:07.293 00:06:07.293 Label: (null) 00:06:07.293 UUID: 401c236d-e388-4403-b289-d0da65640d0f 00:06:07.293 Node size: 16384 00:06:07.293 Sector size: 4096 00:06:07.293 Filesystem size: 510.00MiB 00:06:07.293 Block group profiles: 00:06:07.293 Data: single 8.00MiB 00:06:07.293 Metadata: DUP 32.00MiB 00:06:07.293 System: DUP 8.00MiB 00:06:07.293 SSD detected: yes 00:06:07.293 Zoned device: no 00:06:07.293 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:07.293 Runtime features: free-space-tree 00:06:07.293 Checksum: crc32c 00:06:07.293 Number of devices: 1 00:06:07.293 Devices: 00:06:07.293 ID SIZE PATH 00:06:07.293 1 510.00MiB /dev/nvme0n1p1 00:06:07.293 00:06:07.293 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:07.293 11:10:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:08.230 11:10:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 472194 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:08.230 00:06:08.230 real 0m1.199s 00:06:08.230 user 0m0.016s 00:06:08.230 sys 0m0.120s 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:08.230 ************************************ 00:06:08.230 END TEST filesystem_in_capsule_btrfs 00:06:08.230 ************************************ 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:08.230 ************************************ 00:06:08.230 START TEST filesystem_in_capsule_xfs 00:06:08.230 ************************************ 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:08.230 11:10:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:08.230 11:10:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:08.489 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:08.489 = sectsz=512 attr=2, projid32bit=1 00:06:08.489 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:08.489 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:08.489 data = bsize=4096 blocks=130560, imaxpct=25 00:06:08.489 = sunit=0 swidth=0 blks 00:06:08.489 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:08.489 log =internal log bsize=4096 blocks=16384, version=2 00:06:08.489 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:08.489 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:09.056 Discarding blocks...Done. 00:06:09.056 11:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:09.056 11:10:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:10.957 11:10:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 472194 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:10.957 00:06:10.957 real 0m2.690s 00:06:10.957 user 0m0.019s 00:06:10.957 sys 0m0.054s 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:10.957 ************************************ 00:06:10.957 END TEST filesystem_in_capsule_xfs 00:06:10.957 ************************************ 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:10.957 11:10:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:10.957 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:10.957 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:11.215 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 472194 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 472194 ']' 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 472194 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 472194 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 472194' 00:06:11.215 killing process with pid 472194 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 472194 00:06:11.215 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 472194 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:11.781 00:06:11.781 real 0m14.235s 00:06:11.781 user 0m54.749s 00:06:11.781 sys 0m1.946s 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.781 ************************************ 00:06:11.781 END TEST nvmf_filesystem_in_capsule 00:06:11.781 ************************************ 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:11.781 11:10:37 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:11.781 rmmod nvme_tcp 00:06:11.781 rmmod nvme_fabrics 00:06:11.781 rmmod nvme_keyring 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:11.781 11:10:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.686 11:10:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:13.686 00:06:13.686 real 0m31.216s 00:06:13.686 user 1m42.918s 00:06:13.686 sys 0m5.520s 00:06:13.686 11:10:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.686 11:10:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.686 ************************************ 00:06:13.686 END TEST nvmf_filesystem 00:06:13.686 ************************************ 00:06:13.944 11:10:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:13.944 11:10:39 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:13.944 11:10:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:13.944 11:10:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.944 11:10:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.944 ************************************ 00:06:13.944 START TEST nvmf_target_discovery 00:06:13.945 ************************************ 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:13.945 * Looking for test storage... 
00:06:13.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.945 11:10:39 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:13.945 11:10:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.478 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:16.479 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.479 
11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:16.479 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:16.479 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:16.479 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.479 11:10:42 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:16.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:06:16.479 00:06:16.479 --- 10.0.0.2 ping statistics --- 00:06:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.479 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:06:16.479 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:06:16.480 00:06:16.480 --- 10.0.0.1 ping statistics --- 00:06:16.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.480 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=475984 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 475984 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 475984 ']' 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 [2024-07-12 11:10:42.224274] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:06:16.480 [2024-07-12 11:10:42.224369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.480 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.480 [2024-07-12 11:10:42.288568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.480 [2024-07-12 11:10:42.389626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.480 [2024-07-12 11:10:42.389680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.480 [2024-07-12 11:10:42.389708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.480 [2024-07-12 11:10:42.389719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.480 [2024-07-12 11:10:42.389729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:16.480 [2024-07-12 11:10:42.389883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.480 [2024-07-12 11:10:42.389959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.480 [2024-07-12 11:10:42.389937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.480 [2024-07-12 11:10:42.389962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 [2024-07-12 11:10:42.544806] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:16.480 11:10:42 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 Null1 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 [2024-07-12 11:10:42.585149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:16.480 11:10:42 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.480 Null2 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.480 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:16.481 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.481 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.481 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.481 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:16.481 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.481 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.739 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 Null3 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 Null4 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:16.740 00:06:16.740 Discovery Log Number of Records 6, Generation counter 6 00:06:16.740 =====Discovery Log Entry 0====== 00:06:16.740 trtype: tcp 00:06:16.740 adrfam: ipv4 00:06:16.740 subtype: current discovery subsystem 00:06:16.740 treq: not required 00:06:16.740 portid: 0 00:06:16.740 trsvcid: 4420 00:06:16.740 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:16.740 traddr: 10.0.0.2 00:06:16.740 eflags: explicit discovery connections, duplicate discovery information 00:06:16.740 sectype: none 00:06:16.740 =====Discovery Log Entry 1====== 00:06:16.740 trtype: tcp 00:06:16.740 adrfam: ipv4 00:06:16.740 subtype: nvme subsystem 00:06:16.740 treq: not required 00:06:16.740 portid: 0 00:06:16.740 trsvcid: 4420 00:06:16.740 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:16.740 traddr: 10.0.0.2 00:06:16.740 eflags: none 00:06:16.740 sectype: none 00:06:16.740 =====Discovery Log Entry 2====== 00:06:16.740 trtype: tcp 00:06:16.740 adrfam: ipv4 00:06:16.740 subtype: nvme subsystem 00:06:16.740 treq: not required 00:06:16.740 portid: 
0 00:06:16.740 trsvcid: 4420 00:06:16.740 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:16.740 traddr: 10.0.0.2 00:06:16.740 eflags: none 00:06:16.740 sectype: none 00:06:16.740 =====Discovery Log Entry 3====== 00:06:16.740 trtype: tcp 00:06:16.740 adrfam: ipv4 00:06:16.740 subtype: nvme subsystem 00:06:16.740 treq: not required 00:06:16.740 portid: 0 00:06:16.740 trsvcid: 4420 00:06:16.740 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:16.740 traddr: 10.0.0.2 00:06:16.740 eflags: none 00:06:16.740 sectype: none 00:06:16.740 =====Discovery Log Entry 4====== 00:06:16.740 trtype: tcp 00:06:16.740 adrfam: ipv4 00:06:16.740 subtype: nvme subsystem 00:06:16.740 treq: not required 00:06:16.740 portid: 0 00:06:16.740 trsvcid: 4420 00:06:16.740 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:16.740 traddr: 10.0.0.2 00:06:16.740 eflags: none 00:06:16.740 sectype: none 00:06:16.740 =====Discovery Log Entry 5====== 00:06:16.740 trtype: tcp 00:06:16.740 adrfam: ipv4 00:06:16.740 subtype: discovery subsystem referral 00:06:16.740 treq: not required 00:06:16.740 portid: 0 00:06:16.740 trsvcid: 4430 00:06:16.740 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:16.740 traddr: 10.0.0.2 00:06:16.740 eflags: none 00:06:16.740 sectype: none 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:16.740 Perform nvmf subsystem discovery via RPC 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 [ 00:06:16.740 { 00:06:16.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:16.740 "subtype": "Discovery", 00:06:16.740 "listen_addresses": [ 00:06:16.740 { 00:06:16.740 "trtype": "TCP", 00:06:16.740 "adrfam": "IPv4", 00:06:16.740 "traddr": "10.0.0.2", 
00:06:16.740 "trsvcid": "4420" 00:06:16.740 } 00:06:16.740 ], 00:06:16.740 "allow_any_host": true, 00:06:16.740 "hosts": [] 00:06:16.740 }, 00:06:16.740 { 00:06:16.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:16.740 "subtype": "NVMe", 00:06:16.740 "listen_addresses": [ 00:06:16.740 { 00:06:16.740 "trtype": "TCP", 00:06:16.740 "adrfam": "IPv4", 00:06:16.740 "traddr": "10.0.0.2", 00:06:16.740 "trsvcid": "4420" 00:06:16.740 } 00:06:16.740 ], 00:06:16.740 "allow_any_host": true, 00:06:16.740 "hosts": [], 00:06:16.740 "serial_number": "SPDK00000000000001", 00:06:16.740 "model_number": "SPDK bdev Controller", 00:06:16.740 "max_namespaces": 32, 00:06:16.740 "min_cntlid": 1, 00:06:16.740 "max_cntlid": 65519, 00:06:16.740 "namespaces": [ 00:06:16.740 { 00:06:16.740 "nsid": 1, 00:06:16.740 "bdev_name": "Null1", 00:06:16.740 "name": "Null1", 00:06:16.740 "nguid": "8EDB690B643746ADB152DDF6FAE558BB", 00:06:16.740 "uuid": "8edb690b-6437-46ad-b152-ddf6fae558bb" 00:06:16.740 } 00:06:16.740 ] 00:06:16.740 }, 00:06:16.740 { 00:06:16.740 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:16.740 "subtype": "NVMe", 00:06:16.740 "listen_addresses": [ 00:06:16.740 { 00:06:16.740 "trtype": "TCP", 00:06:16.740 "adrfam": "IPv4", 00:06:16.740 "traddr": "10.0.0.2", 00:06:16.740 "trsvcid": "4420" 00:06:16.740 } 00:06:16.740 ], 00:06:16.740 "allow_any_host": true, 00:06:16.740 "hosts": [], 00:06:16.740 "serial_number": "SPDK00000000000002", 00:06:16.740 "model_number": "SPDK bdev Controller", 00:06:16.740 "max_namespaces": 32, 00:06:16.740 "min_cntlid": 1, 00:06:16.740 "max_cntlid": 65519, 00:06:16.740 "namespaces": [ 00:06:16.740 { 00:06:16.740 "nsid": 1, 00:06:16.740 "bdev_name": "Null2", 00:06:16.740 "name": "Null2", 00:06:16.740 "nguid": "9325B198BACC424AACEB1A9868A18E9F", 00:06:16.740 "uuid": "9325b198-bacc-424a-aceb-1a9868a18e9f" 00:06:16.740 } 00:06:16.740 ] 00:06:16.740 }, 00:06:16.740 { 00:06:16.740 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:16.740 "subtype": "NVMe", 00:06:16.740 
"listen_addresses": [ 00:06:16.740 { 00:06:16.740 "trtype": "TCP", 00:06:16.740 "adrfam": "IPv4", 00:06:16.740 "traddr": "10.0.0.2", 00:06:16.740 "trsvcid": "4420" 00:06:16.740 } 00:06:16.740 ], 00:06:16.740 "allow_any_host": true, 00:06:16.740 "hosts": [], 00:06:16.740 "serial_number": "SPDK00000000000003", 00:06:16.740 "model_number": "SPDK bdev Controller", 00:06:16.740 "max_namespaces": 32, 00:06:16.740 "min_cntlid": 1, 00:06:16.740 "max_cntlid": 65519, 00:06:16.740 "namespaces": [ 00:06:16.740 { 00:06:16.740 "nsid": 1, 00:06:16.740 "bdev_name": "Null3", 00:06:16.740 "name": "Null3", 00:06:16.740 "nguid": "E52C44CA6F374AE99A7A77A384009018", 00:06:16.740 "uuid": "e52c44ca-6f37-4ae9-9a7a-77a384009018" 00:06:16.740 } 00:06:16.740 ] 00:06:16.740 }, 00:06:16.740 { 00:06:16.740 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:16.740 "subtype": "NVMe", 00:06:16.740 "listen_addresses": [ 00:06:16.740 { 00:06:16.740 "trtype": "TCP", 00:06:16.740 "adrfam": "IPv4", 00:06:16.740 "traddr": "10.0.0.2", 00:06:16.740 "trsvcid": "4420" 00:06:16.740 } 00:06:16.740 ], 00:06:16.740 "allow_any_host": true, 00:06:16.740 "hosts": [], 00:06:16.740 "serial_number": "SPDK00000000000004", 00:06:16.740 "model_number": "SPDK bdev Controller", 00:06:16.740 "max_namespaces": 32, 00:06:16.740 "min_cntlid": 1, 00:06:16.740 "max_cntlid": 65519, 00:06:16.740 "namespaces": [ 00:06:16.740 { 00:06:16.740 "nsid": 1, 00:06:16.740 "bdev_name": "Null4", 00:06:16.740 "name": "Null4", 00:06:16.740 "nguid": "592CCB8191C14C9CBF846A947033819D", 00:06:16.740 "uuid": "592ccb81-91c1-4c9c-bf84-6a947033819d" 00:06:16.740 } 00:06:16.740 ] 00:06:16.740 } 00:06:16.740 ] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.740 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:16.999 11:10:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:16.999 rmmod nvme_tcp 00:06:16.999 rmmod nvme_fabrics 00:06:16.999 rmmod nvme_keyring 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:16.999 
11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 475984 ']' 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 475984 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 475984 ']' 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 475984 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 475984 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 475984' 00:06:16.999 killing process with pid 475984 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 475984 00:06:16.999 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 475984 00:06:17.258 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.259 11:10:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.799 11:10:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:19.799 00:06:19.800 real 0m5.517s 00:06:19.800 user 0m4.460s 00:06:19.800 sys 0m1.865s 00:06:19.800 11:10:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.800 11:10:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:19.800 ************************************ 00:06:19.800 END TEST nvmf_target_discovery 00:06:19.800 ************************************ 00:06:19.800 11:10:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:19.800 11:10:45 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:19.800 11:10:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:19.800 11:10:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.800 11:10:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.800 ************************************ 00:06:19.800 START TEST nvmf_referrals 00:06:19.800 ************************************ 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:19.800 * Looking for test storage... 
00:06:19.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.800 
11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:19.800 11:10:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:21.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:21.707 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:21.708 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:21.708 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.708 11:10:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:21.708 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:21.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:21.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:06:21.708 00:06:21.708 --- 10.0.0.2 ping statistics --- 00:06:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.708 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:06:21.708 00:06:21.708 --- 10.0.0.1 ping statistics --- 00:06:21.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.708 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.708 11:10:47 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=478038 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 478038 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 478038 ']' 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.708 11:10:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:21.708 [2024-07-12 11:10:47.797619] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:06:21.708 [2024-07-12 11:10:47.797698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.708 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.966 [2024-07-12 11:10:47.865385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.966 [2024-07-12 11:10:47.973391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.966 [2024-07-12 11:10:47.973462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:21.966 [2024-07-12 11:10:47.973474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.966 [2024-07-12 11:10:47.973484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.966 [2024-07-12 11:10:47.973507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:21.966 [2024-07-12 11:10:47.973615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.966 [2024-07-12 11:10:47.973723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.966 [2024-07-12 11:10:47.973845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.966 [2024-07-12 11:10:47.973848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 [2024-07-12 11:10:48.124731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 [2024-07-12 11:10:48.136961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:22.225 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.485 11:10:48 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:22.485 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:22.744 11:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:23.004 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:23.004 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:23.004 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:23.004 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:23.004 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.005 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.263 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:23.521 11:10:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.521 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:23.781 rmmod nvme_tcp 00:06:23.781 rmmod nvme_fabrics 00:06:23.781 rmmod nvme_keyring 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 478038 ']' 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 478038 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 478038 ']' 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 478038 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 478038 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 478038' 00:06:23.781 killing process with pid 478038 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 478038 00:06:23.781 11:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 478038 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:24.040 11:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.582 11:10:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:26.582 00:06:26.582 real 0m6.753s 00:06:26.582 user 0m9.848s 00:06:26.582 sys 0m2.204s 00:06:26.582 11:10:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.582 11:10:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.582 ************************************ 
00:06:26.582 END TEST nvmf_referrals 00:06:26.582 ************************************ 00:06:26.582 11:10:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:26.582 11:10:52 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:26.582 11:10:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.582 11:10:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.582 11:10:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.582 ************************************ 00:06:26.582 START TEST nvmf_connect_disconnect 00:06:26.582 ************************************ 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:26.582 * Looking for test storage... 00:06:26.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.582 11:10:52 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.582 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:26.583 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:26.583 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:26.583 11:10:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:28.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.483 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:28.484 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.484 11:10:54 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:28.484 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:28.484 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:28.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:06:28.484 00:06:28.484 --- 10.0.0.2 ping statistics --- 00:06:28.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.484 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:06:28.484 00:06:28.484 --- 10.0.0.1 ping statistics --- 00:06:28.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.484 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=480336 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 480336 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 480336 ']' 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.484 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.484 [2024-07-12 11:10:54.537884] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:06:28.484 [2024-07-12 11:10:54.537969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.484 [2024-07-12 11:10:54.602839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.742 [2024-07-12 11:10:54.713734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.742 [2024-07-12 11:10:54.713790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.742 [2024-07-12 11:10:54.713820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.742 [2024-07-12 11:10:54.713833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.742 [2024-07-12 11:10:54.713843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:28.743 [2024-07-12 11:10:54.713956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.743 [2024-07-12 11:10:54.714006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.743 [2024-07-12 11:10:54.714064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.743 [2024-07-12 11:10:54.714061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.743 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:28.743 [2024-07-12 11:10:54.872814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:29.003 [2024-07-12 11:10:54.933890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:29.003 11:10:54 
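The target in this trace is stood up by a short, fixed RPC sequence: TCP transport, malloc bdev, subsystem, namespace, listener. A minimal sketch of that sequence follows; it assumes a running `nvmf_tgt` reachable on the default `/var/tmp/spdk.sock` and SPDK's `scripts/rpc.py` on `PATH` (the autotest instead uses absolute workspace paths inside a network namespace):

```shell
#!/usr/bin/env bash
# Sketch of the target-side RPC sequence from the trace above.
# Assumptions: an nvmf_tgt process already listening on the default
# /var/tmp/spdk.sock, SPDK's rpc.py on PATH, and 10.0.0.2 configured
# on a local interface.
set -euo pipefail

rpc="rpc.py"
nqn="nqn.2016-06.io.spdk:cnode1"

"$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0      # same flags as the trace
bdev=$("$rpc" bdev_malloc_create 64 512)                 # 64 MB bdev, 512 B blocks; prints its name (Malloc0)
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns "$nqn" "$bdev"
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
```

With the listener up, an initiator can connect over TCP to 10.0.0.2:4420; the test then repeats a connect/disconnect cycle `num_iterations=5` times, which is what produces the five "disconnected 1 controller(s)" lines below.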
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:29.003 11:10:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:31.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:34.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:37.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:39.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:42.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:42.490 rmmod nvme_tcp 00:06:42.490 rmmod nvme_fabrics 00:06:42.490 rmmod nvme_keyring 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 480336 ']' 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 480336 00:06:42.490 11:11:08 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 480336 ']' 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 480336 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 480336 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 480336' 00:06:42.490 killing process with pid 480336 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 480336 00:06:42.490 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 480336 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.056 11:11:08 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.963 11:11:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:44.963 00:06:44.963 real 0m18.709s 00:06:44.963 user 0m56.014s 00:06:44.963 sys 0m3.369s 00:06:44.963 11:11:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.963 11:11:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:44.963 ************************************ 00:06:44.963 END TEST nvmf_connect_disconnect 00:06:44.963 ************************************ 00:06:44.963 11:11:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:44.963 11:11:10 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:44.963 11:11:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:44.964 11:11:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.964 11:11:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.964 ************************************ 00:06:44.964 START TEST nvmf_multitarget 00:06:44.964 ************************************ 00:06:44.964 11:11:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:44.964 * Looking for test storage... 
00:06:44.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.964 11:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.493 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.493 11:11:13 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.493 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.493 11:11:13 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:06:47.493 00:06:47.493 --- 10.0.0.2 ping statistics --- 00:06:47.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.493 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:47.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:06:47.493 00:06:47.493 --- 10.0.0.1 ping statistics --- 00:06:47.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.493 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=484090 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 484090 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- 
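Both test cases rely on the same namespace wiring from nvmf/common.sh, visible in the trace: one port of the NIC pair is moved into a namespace for the target, its peer stays in the root namespace for the initiator, and a cross-ping verifies the path before the target starts. Collected from the trace (root required; `cvl_0_0`/`cvl_0_1` are this rig's ice/E810 ports):

```shell
#!/usr/bin/env bash
# The netns setup performed by nvmf/common.sh, as traced above.
# Requires root and the two ice/E810 net devices cvl_0_0 and cvl_0_1.
set -euo pipefail

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side, inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify both directions before starting nvmf_tgt in the namespace:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

nvmf_tgt is then launched with `ip netns exec cvl_0_0_ns_spdk ...`, so its listener binds inside the namespace while the initiator-side tools run in the root namespace.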
common/autotest_common.sh@829 -- # '[' -z 484090 ']' 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.493 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.493 [2024-07-12 11:11:13.369030] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:06:47.493 [2024-07-12 11:11:13.369116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.493 [2024-07-12 11:11:13.433704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.493 [2024-07-12 11:11:13.545778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.493 [2024-07-12 11:11:13.545834] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.493 [2024-07-12 11:11:13.545847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.493 [2024-07-12 11:11:13.545858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.493 [2024-07-12 11:11:13.545873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:47.493 [2024-07-12 11:11:13.545940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.493 [2024-07-12 11:11:13.545998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.493 [2024-07-12 11:11:13.546067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.493 [2024-07-12 11:11:13.546069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:47.751 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:48.009 "nvmf_tgt_1" 00:06:48.009 11:11:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:48.009 "nvmf_tgt_2" 00:06:48.009 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.009 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:48.266 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:48.266 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:48.266 true 00:06:48.266 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:48.266 true 00:06:48.266 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:48.266 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.525 11:11:14 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.525 rmmod nvme_tcp 00:06:48.525 rmmod nvme_fabrics 00:06:48.525 rmmod nvme_keyring 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 484090 ']' 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 484090 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 484090 ']' 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 484090 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 484090 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 484090' 00:06:48.525 killing process with pid 484090 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 484090 00:06:48.525 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 484090 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.784 11:11:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.318 11:11:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:51.318 00:06:51.318 real 0m5.890s 00:06:51.318 user 0m6.510s 00:06:51.318 sys 0m2.041s 00:06:51.318 11:11:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.318 11:11:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:51.318 ************************************ 00:06:51.318 END TEST nvmf_multitarget 00:06:51.318 ************************************ 00:06:51.318 11:11:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:51.318 11:11:16 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:51.318 11:11:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.318 11:11:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.318 11:11:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.318 ************************************ 00:06:51.318 START TEST nvmf_rpc 00:06:51.318 ************************************ 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:51.318 * Looking for test storage... 
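The `killprocess 484090` sequence above first probes the PID with `kill -0`, reads the command name with `ps --no-headers -o comm=`, refuses to proceed if the name is `sudo`, then kills and waits. A minimal runnable sketch of that guard-then-kill pattern, using a background `sleep` as a stand-in for the target process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper pattern seen in the log.
set -euo pipefail

sleep 30 &                     # stand-in for the nvmf_tgt reactor process
pid=$!

kill -0 "$pid"                 # fails (and aborts) if the PID is already gone
process_name=$(ps --no-headers -o comm= "$pid")
if [ "$process_name" = "sudo" ]; then
  echo "refusing to kill sudo (pid $pid)" >&2
  exit 1
fi
echo "killing process with pid $pid"
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it; wait returns 143 after SIGTERM
echo "process $pid stopped"
```

The `comm=` check is the safety net: it ensures the harness never signals a recycled PID that now belongs to `sudo`.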
00:06:51.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.318 11:11:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:51.318 11:11:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.222 11:11:19 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.222 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:53.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:53.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:53.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:53.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.223 11:11:19 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:06:53.223 00:06:53.223 --- 10.0.0.2 ping statistics --- 00:06:53.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.223 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:06:53.223 00:06:53.223 --- 10.0.0.1 ping statistics --- 00:06:53.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.223 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:53.223 
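The `nvmf_tcp_init` trace above builds a two-sided TCP test rig on one physical NIC pair: one port is moved into a network namespace (the target side), both sides get addresses on 10.0.0.0/24, port 4420 is opened, and connectivity is verified with a ping in each direction. A condensed sketch of that layout (requires root; interface names `cvl_0_0`/`cvl_0_1` are taken from this log and must match your hardware):

```shell
# Namespace-based TCP loopback setup, as performed by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
```

Running the target inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as the `NVMF_TARGET_NS_CMD` prefix in the log shows) is what lets a single host exercise real NIC-to-NIC TCP traffic.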
11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=486189 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 486189 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 486189 ']' 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.223 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.223 [2024-07-12 11:11:19.280949] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:06:53.223 [2024-07-12 11:11:19.281046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.223 [2024-07-12 11:11:19.348000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.481 [2024-07-12 11:11:19.459491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.481 [2024-07-12 11:11:19.459550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.481 [2024-07-12 11:11:19.459563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.481 [2024-07-12 11:11:19.459574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.481 [2024-07-12 11:11:19.459584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:53.481 [2024-07-12 11:11:19.459638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.481 [2024-07-12 11:11:19.459698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.481 [2024-07-12 11:11:19.459766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.481 [2024-07-12 11:11:19.459769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.481 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.481 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.481 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:53.481 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:53.481 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:53.739 "tick_rate": 2700000000, 00:06:53.739 "poll_groups": [ 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_000", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [] 00:06:53.739 }, 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_001", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 
0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [] 00:06:53.739 }, 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_002", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [] 00:06:53.739 }, 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_003", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [] 00:06:53.739 } 00:06:53.739 ] 00:06:53.739 }' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.739 [2024-07-12 11:11:19.711851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:53.739 "tick_rate": 2700000000, 00:06:53.739 "poll_groups": [ 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_000", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [ 00:06:53.739 { 00:06:53.739 "trtype": "TCP" 00:06:53.739 } 00:06:53.739 ] 00:06:53.739 }, 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_001", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [ 00:06:53.739 { 00:06:53.739 "trtype": "TCP" 00:06:53.739 } 00:06:53.739 ] 00:06:53.739 }, 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_002", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [ 00:06:53.739 { 00:06:53.739 "trtype": "TCP" 00:06:53.739 } 00:06:53.739 ] 00:06:53.739 }, 00:06:53.739 { 00:06:53.739 "name": "nvmf_tgt_poll_group_003", 00:06:53.739 "admin_qpairs": 0, 00:06:53.739 "io_qpairs": 0, 00:06:53.739 "current_admin_qpairs": 0, 00:06:53.739 "current_io_qpairs": 0, 00:06:53.739 "pending_bdev_io": 0, 00:06:53.739 "completed_nvme_io": 0, 00:06:53.739 "transports": [ 00:06:53.739 { 00:06:53.739 "trtype": "TCP" 00:06:53.739 } 00:06:53.739 ] 00:06:53.739 } 
00:06:53.739 ] 00:06:53.739 }' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:06:53.739 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.740 Malloc1 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.740 [2024-07-12 11:11:19.855725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:53.740 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:53.997 [2024-07-12 11:11:19.878156] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:53.997 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:53.997 could not add new controller: failed to write to nvme-fabrics device 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.997 11:11:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.561 11:11:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.561 11:11:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:54.561 11:11:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.561 11:11:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:54.561 11:11:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:56.453 11:11:22 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:56.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.711 11:11:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:56.711 [2024-07-12 11:11:22.651745] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:56.711 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:56.711 could not add new controller: failed to write to nvme-fabrics device 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.711 11:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:57.277 11:11:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:06:57.277 11:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:57.277 11:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:57.277 11:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:57.277 11:11:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:06:59.167 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:59.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.424 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.425 [2024-07-12 11:11:25.401556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.425 11:11:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.986 11:11:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.986 11:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:59.986 11:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.986 11:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:59.986 11:11:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:02.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 [2024-07-12 11:11:28.214305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.508 11:11:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.766 11:11:28 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:02.766 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:02.766 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.766 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:02.766 11:11:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:05.348 11:11:30 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 [2024-07-12 11:11:30.931314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:05.348 11:11:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.348 11:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.605 11:11:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.605 11:11:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:05.605 11:11:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.605 11:11:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.605 11:11:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.498 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.498 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.498 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.756 [2024-07-12 11:11:33.779108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.756 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.757 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.757 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.757 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.757 11:11:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.757 11:11:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.321 11:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.321 11:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:07:08.321 11:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.321 11:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.321 11:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.842 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.843 [2024-07-12 11:11:36.545308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.843 11:11:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.407 11:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.407 11:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.407 11:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.407 11:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:11.407 11:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.303 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.303 [2024-07-12 11:11:39.412678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.303 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.560 [2024-07-12 11:11:39.460760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.560 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 [2024-07-12 11:11:39.508933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 [2024-07-12 11:11:39.557097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 [2024-07-12 11:11:39.605250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:13.561 "tick_rate": 2700000000, 00:07:13.561 "poll_groups": [ 00:07:13.561 { 00:07:13.561 "name": "nvmf_tgt_poll_group_000", 00:07:13.561 "admin_qpairs": 2, 00:07:13.561 "io_qpairs": 84, 00:07:13.561 "current_admin_qpairs": 0, 00:07:13.561 "current_io_qpairs": 0, 00:07:13.561 "pending_bdev_io": 0, 00:07:13.561 "completed_nvme_io": 134, 00:07:13.561 "transports": [ 00:07:13.561 { 00:07:13.561 "trtype": "TCP" 00:07:13.561 } 00:07:13.561 ] 00:07:13.561 }, 00:07:13.561 { 00:07:13.561 "name": "nvmf_tgt_poll_group_001", 00:07:13.561 "admin_qpairs": 2, 00:07:13.561 "io_qpairs": 84, 
00:07:13.561 "current_admin_qpairs": 0, 00:07:13.561 "current_io_qpairs": 0, 00:07:13.561 "pending_bdev_io": 0, 00:07:13.561 "completed_nvme_io": 184, 00:07:13.561 "transports": [ 00:07:13.561 { 00:07:13.561 "trtype": "TCP" 00:07:13.561 } 00:07:13.561 ] 00:07:13.561 }, 00:07:13.561 { 00:07:13.561 "name": "nvmf_tgt_poll_group_002", 00:07:13.561 "admin_qpairs": 1, 00:07:13.561 "io_qpairs": 84, 00:07:13.561 "current_admin_qpairs": 0, 00:07:13.561 "current_io_qpairs": 0, 00:07:13.561 "pending_bdev_io": 0, 00:07:13.561 "completed_nvme_io": 184, 00:07:13.561 "transports": [ 00:07:13.561 { 00:07:13.561 "trtype": "TCP" 00:07:13.561 } 00:07:13.561 ] 00:07:13.561 }, 00:07:13.561 { 00:07:13.561 "name": "nvmf_tgt_poll_group_003", 00:07:13.561 "admin_qpairs": 2, 00:07:13.561 "io_qpairs": 84, 00:07:13.561 "current_admin_qpairs": 0, 00:07:13.561 "current_io_qpairs": 0, 00:07:13.561 "pending_bdev_io": 0, 00:07:13.561 "completed_nvme_io": 184, 00:07:13.561 "transports": [ 00:07:13.561 { 00:07:13.561 "trtype": "TCP" 00:07:13.561 } 00:07:13.561 ] 00:07:13.561 } 00:07:13.561 ] 00:07:13.561 }' 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:13.561 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.819 rmmod nvme_tcp 00:07:13.819 rmmod nvme_fabrics 00:07:13.819 rmmod nvme_keyring 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 486189 ']' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 486189 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 486189 ']' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 486189 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 486189 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.819 11:11:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 486189' 00:07:13.819 killing process with pid 486189 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 486189 00:07:13.819 11:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 486189 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.078 11:11:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.609 11:11:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:16.609 00:07:16.609 real 0m25.248s 00:07:16.609 user 1m21.651s 00:07:16.609 sys 0m4.209s 00:07:16.609 11:11:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.609 11:11:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.609 ************************************ 00:07:16.609 END TEST nvmf_rpc 00:07:16.609 ************************************ 00:07:16.609 11:11:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:16.609 11:11:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:16.609 11:11:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:16.609 11:11:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:07:16.609 11:11:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.609 ************************************ 00:07:16.609 START TEST nvmf_invalid 00:07:16.609 ************************************ 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:16.609 * Looking for test storage... 00:07:16.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.609 11:11:42 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.609 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:16.610 11:11:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:18.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.512 11:11:44 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:18.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:18.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:18.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.512 11:11:44 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:18.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:07:18.512 00:07:18.512 --- 10.0.0.2 ping statistics --- 00:07:18.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.512 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:07:18.512 00:07:18.512 --- 10.0.0.1 ping statistics --- 00:07:18.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.512 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=490703 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 490703 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 490703 ']' 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.512 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:18.512 [2024-07-12 11:11:44.624926] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:07:18.512 [2024-07-12 11:11:44.624994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.769 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.769 [2024-07-12 11:11:44.686408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.769 [2024-07-12 11:11:44.788919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.769 [2024-07-12 11:11:44.788971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:18.769 [2024-07-12 11:11:44.788984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.769 [2024-07-12 11:11:44.788995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.769 [2024-07-12 11:11:44.789005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.769 [2024-07-12 11:11:44.789084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.769 [2024-07-12 11:11:44.789147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.769 [2024-07-12 11:11:44.789214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.769 [2024-07-12 11:11:44.789217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:19.026 11:11:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17541 00:07:19.026 [2024-07-12 11:11:45.157383] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:19.283 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:07:19.283 { 00:07:19.283 "nqn": "nqn.2016-06.io.spdk:cnode17541", 00:07:19.283 "tgt_name": "foobar", 00:07:19.283 "method": "nvmf_create_subsystem", 00:07:19.283 "req_id": 1 00:07:19.283 } 00:07:19.283 Got JSON-RPC error response 00:07:19.283 response: 00:07:19.283 { 00:07:19.283 "code": -32603, 00:07:19.283 "message": "Unable to find target foobar" 00:07:19.283 }' 00:07:19.283 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:19.283 { 00:07:19.283 "nqn": "nqn.2016-06.io.spdk:cnode17541", 00:07:19.283 "tgt_name": "foobar", 00:07:19.283 "method": "nvmf_create_subsystem", 00:07:19.283 "req_id": 1 00:07:19.283 } 00:07:19.283 Got JSON-RPC error response 00:07:19.283 response: 00:07:19.283 { 00:07:19.283 "code": -32603, 00:07:19.283 "message": "Unable to find target foobar" 00:07:19.283 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:19.283 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:19.283 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2567 00:07:19.540 [2024-07-12 11:11:45.450392] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2567: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:19.540 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:19.540 { 00:07:19.540 "nqn": "nqn.2016-06.io.spdk:cnode2567", 00:07:19.540 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:19.540 "method": "nvmf_create_subsystem", 00:07:19.540 "req_id": 1 00:07:19.540 } 00:07:19.540 Got JSON-RPC error response 00:07:19.540 response: 00:07:19.540 { 00:07:19.540 "code": -32602, 00:07:19.540 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:19.540 }' 00:07:19.540 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:19.540 { 00:07:19.540 "nqn": 
"nqn.2016-06.io.spdk:cnode2567", 00:07:19.540 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:19.540 "method": "nvmf_create_subsystem", 00:07:19.540 "req_id": 1 00:07:19.540 } 00:07:19.540 Got JSON-RPC error response 00:07:19.540 response: 00:07:19.540 { 00:07:19.540 "code": -32602, 00:07:19.540 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:19.540 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:19.540 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:19.540 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5754 00:07:19.798 [2024-07-12 11:11:45.715257] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5754: invalid model number 'SPDK_Controller' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:19.798 { 00:07:19.798 "nqn": "nqn.2016-06.io.spdk:cnode5754", 00:07:19.798 "model_number": "SPDK_Controller\u001f", 00:07:19.798 "method": "nvmf_create_subsystem", 00:07:19.798 "req_id": 1 00:07:19.798 } 00:07:19.798 Got JSON-RPC error response 00:07:19.798 response: 00:07:19.798 { 00:07:19.798 "code": -32602, 00:07:19.798 "message": "Invalid MN SPDK_Controller\u001f" 00:07:19.798 }' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:19.798 { 00:07:19.798 "nqn": "nqn.2016-06.io.spdk:cnode5754", 00:07:19.798 "model_number": "SPDK_Controller\u001f", 00:07:19.798 "method": "nvmf_create_subsystem", 00:07:19.798 "req_id": 1 00:07:19.798 } 00:07:19.798 Got JSON-RPC error response 00:07:19.798 response: 00:07:19.798 { 00:07:19.798 "code": -32602, 00:07:19.798 "message": "Invalid MN SPDK_Controller\u001f" 00:07:19.798 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x74' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=n 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 78 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q6tZp=,~XX]n!ib3N5&,n' 00:07:19.798 11:11:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Q6tZp=,~XX]n!ib3N5&,n' nqn.2016-06.io.spdk:cnode1166 00:07:20.057 [2024-07-12 11:11:46.028347] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1166: invalid serial number 'Q6tZp=,~XX]n!ib3N5&,n' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:20.057 { 00:07:20.057 "nqn": "nqn.2016-06.io.spdk:cnode1166", 00:07:20.057 "serial_number": "Q6tZp=,~XX]n!ib3N5&,n", 00:07:20.057 "method": "nvmf_create_subsystem", 00:07:20.057 "req_id": 1 00:07:20.057 } 00:07:20.057 Got JSON-RPC error response 00:07:20.057 response: 00:07:20.057 { 00:07:20.057 "code": -32602, 00:07:20.057 "message": "Invalid SN Q6tZp=,~XX]n!ib3N5&,n" 00:07:20.057 }' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:20.057 { 00:07:20.057 "nqn": "nqn.2016-06.io.spdk:cnode1166", 00:07:20.057 "serial_number": "Q6tZp=,~XX]n!ib3N5&,n", 00:07:20.057 "method": "nvmf_create_subsystem", 00:07:20.057 "req_id": 1 00:07:20.057 } 00:07:20.057 Got JSON-RPC error response 00:07:20.057 response: 00:07:20.057 { 00:07:20.057 "code": -32602, 00:07:20.057 "message": "Invalid SN Q6tZp=,~XX]n!ib3N5&,n" 00:07:20.057 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' 
'39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x33' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='^' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:20.057 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 108 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x67' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=l 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:07:20.058 11:11:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'HNgBgD{3LBPC.L}K^6?Y/l3Cm /dev/null' 00:07:22.941 11:11:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.476 11:11:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:25.476 00:07:25.476 real 0m8.802s 00:07:25.476 user 0m20.480s 00:07:25.476 sys 0m2.449s 00:07:25.476 11:11:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.476 11:11:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:25.476 ************************************ 00:07:25.476 END TEST nvmf_invalid 00:07:25.476 ************************************ 00:07:25.476 11:11:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:25.476 11:11:51 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.476 11:11:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.476 11:11:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.476 11:11:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.476 ************************************ 
00:07:25.476 START TEST nvmf_abort 00:07:25.476 ************************************ 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:25.476 * Looking for test storage... 00:07:25.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.476 11:11:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:27.376 11:11:53 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:27.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:27.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.376 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:27.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:27.377 Found net devices under 
0000:0a:00.1: cvl_0_1 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:27.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:07:27.377 00:07:27.377 --- 10.0.0.2 ping statistics --- 00:07:27.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.377 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:07:27.377 00:07:27.377 --- 10.0.0.1 ping statistics --- 00:07:27.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.377 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=493341 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 493341 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 493341 ']' 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.377 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.377 [2024-07-12 11:11:53.399976] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:07:27.377 [2024-07-12 11:11:53.400058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.377 [2024-07-12 11:11:53.467670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.635 [2024-07-12 11:11:53.579563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.635 [2024-07-12 11:11:53.579623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.635 [2024-07-12 11:11:53.579637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.635 [2024-07-12 11:11:53.579648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.635 [2024-07-12 11:11:53.579658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:27.635 [2024-07-12 11:11:53.580901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.635 [2024-07-12 11:11:53.580989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.635 [2024-07-12 11:11:53.580992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.635 [2024-07-12 11:11:53.732424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.635 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.893 Malloc0 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.894 Delay0 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.894 [2024-07-12 11:11:53.808528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.894 11:11:53 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:27.894 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.894 [2024-07-12 11:11:53.913981] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:30.417 Initializing NVMe Controllers 00:07:30.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:30.417 controller IO queue size 128 less than required 00:07:30.417 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:30.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:30.417 Initialization complete. Launching workers. 
00:07:30.417 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32548 00:07:30.417 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32609, failed to submit 62 00:07:30.417 success 32552, unsuccess 57, failed 0 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.417 rmmod nvme_tcp 00:07:30.417 rmmod nvme_fabrics 00:07:30.417 rmmod nvme_keyring 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 493341 ']' 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 493341 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 493341 ']' 00:07:30.417 11:11:56 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 493341 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 493341 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 493341' 00:07:30.417 killing process with pid 493341 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 493341 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 493341 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.417 11:11:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.347 11:11:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:32.347 00:07:32.347 real 0m7.366s 00:07:32.347 user 0m10.565s 00:07:32.347 sys 0m2.634s 00:07:32.347 11:11:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:32.347 11:11:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.347 ************************************ 00:07:32.347 END TEST nvmf_abort 00:07:32.347 ************************************ 00:07:32.608 11:11:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:32.608 11:11:58 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:32.608 11:11:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.608 11:11:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.608 11:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.608 ************************************ 00:07:32.608 START TEST nvmf_ns_hotplug_stress 00:07:32.608 ************************************ 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:32.608 * Looking for test storage... 
00:07:32.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.608 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.609 11:11:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.609 11:11:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.138 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:35.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.139 
11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:35.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:35.139 
Found net devices under 0000:0a:00.0: cvl_0_0 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:35.139 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.139 11:12:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:07:35.139 00:07:35.139 --- 10.0.0.2 ping statistics --- 00:07:35.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.139 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:07:35.139 00:07:35.139 --- 10.0.0.1 ping statistics --- 00:07:35.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.139 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=495682 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 495682 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 495682 ']' 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.139 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.140 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.140 11:12:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.140 [2024-07-12 11:12:00.876526] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
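The `waitforlisten` helper traced above (common/autotest_common.sh@829-@838) launches `nvmf_tgt` inside the namespace and then blocks until the process is alive and its RPC socket (`/var/tmp/spdk.sock`) appears. A minimal sketch of that poll-with-retries pattern — the function name, retry count, and sleep interval here are illustrative, not the harness's exact code:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper: succeed once the PID is alive AND
# the RPC socket path exists; fail if the process dies or we run out of retries.
wait_for_rpc_sock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Target process died -> no point waiting for its socket.
        kill -0 "$pid" 2>/dev/null || return 1
        # Socket (or placeholder file) showed up -> target is listening.
        [[ -S $rpc_addr || -e $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1  # timed out
}
```

The `kill -0` liveness probe is the same idiom ns_hotplug_stress.sh@44 uses later to confirm the perf process is still running between hotplug iterations.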
00:07:35.140 [2024-07-12 11:12:00.876603] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:35.140 EAL: No free 2048 kB hugepages reported on node 1
00:07:35.140 [2024-07-12 11:12:00.942993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:35.140 [2024-07-12 11:12:01.052002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:35.140 [2024-07-12 11:12:01.052059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:35.140 [2024-07-12 11:12:01.052078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:35.140 [2024-07-12 11:12:01.052089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:35.140 [2024-07-12 11:12:01.052099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
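The namespace plumbing traced earlier (common.sh@244 through @264) follows a fixed recipe: flush both ports, move the target-side interface into a private network namespace, address both ends on 10.0.0.0/24, bring links up, and open TCP/4420 — so initiator-to-target I/O crosses a real TCP path between namespaces. A dry-run sketch that only prints those commands (interface and namespace names are taken from the log; executing them for real would need root):

```shell
#!/usr/bin/env bash
# Print (rather than execute) the netns topology built by nvmf/common.sh:
# the target port is isolated in its own namespace, the initiator port
# stays in the default namespace.
print_tcp_topology() {
    local target_if=$1 initiator_if=$2 ns=$3
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

print_tcp_topology cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The bidirectional pings in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this topology before any NVMe-oF traffic starts.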
00:07:35.140 [2024-07-12 11:12:01.052172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.140 [2024-07-12 11:12:01.052313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.140 [2024-07-12 11:12:01.052316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:35.140 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.397 [2024-07-12 11:12:01.423132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.397 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:35.654 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.912 [2024-07-12 11:12:01.942020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:07:35.912 11:12:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.168 11:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:36.426 Malloc0 00:07:36.426 11:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.682 Delay0 00:07:36.682 11:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.940 11:12:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:37.197 NULL1 00:07:37.197 11:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:37.454 11:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=496098 00:07:37.454 11:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:37.454 11:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:37.454 11:12:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.827 Read completed with error (sct=0, sc=11) 00:07:38.827 11:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.084 11:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:39.084 11:12:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:39.341 true 00:07:39.341 11:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:39.341 11:12:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.905 11:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.471 11:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 
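Each iteration from here on repeats the same three-step cycle driven by ns_hotplug_stress.sh@44-@50: confirm the perf process is still alive (`kill -0`), detach namespace 1 from the subsystem, reattach `Delay0`, and grow `NULL1` by one block (`null_size` 1001, 1002, ...), all while `spdk_nvme_perf` keeps I/O in flight. A dry-run sketch of that loop — the rpc.py lines are printed rather than executed, and the iteration bound and `RPC` path are illustrative:

```shell
#!/usr/bin/env bash
# Dry-run of the hotplug stress cycle: detach/reattach a namespace and
# resize the null bdev while I/O is in flight. Commands are echoed,
# not executed.
RPC="scripts/rpc.py"                  # assumed relative path to the SPDK RPC client
NQN="nqn.2016-06.io.spdk:cnode1"

hotplug_cycle() {
    local iterations=$1 null_size=1000
    local i
    for ((i = 0; i < iterations; i++)); do
        # In the harness this is guarded by: kill -0 "$PERF_PID" || break
        echo "$RPC nvmf_subsystem_remove_ns $NQN 1"
        echo "$RPC nvmf_subsystem_add_ns $NQN Delay0"
        null_size=$((null_size + 1))
        echo "$RPC bdev_null_resize NULL1 $null_size"
    done
}

hotplug_cycle 5
```

The "Read completed with error (sct=0, sc=11)" bursts in the log are the expected side effect: I/O against the namespace that was just removed fails until the reattach completes.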
00:07:40.471 11:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:40.471 true 00:07:40.471 11:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:40.471 11:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.035 11:12:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.035 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:41.035 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:41.292 true 00:07:41.292 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:41.292 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.549 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.806 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:41.806 11:12:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:42.064 true 00:07:42.064 11:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
496098 00:07:42.064 11:12:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.994 11:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.251 11:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:43.251 11:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:43.508 true 00:07:43.508 11:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:43.508 11:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.765 11:12:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.022 11:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:44.022 11:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:44.278 true 00:07:44.278 11:12:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:44.278 11:12:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.209 11:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.466 11:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:45.466 11:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:45.723 true 00:07:45.723 11:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:45.723 11:12:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.980 11:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.237 11:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:46.237 11:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:46.494 true 00:07:46.494 11:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:46.494 11:12:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.433 11:12:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.433 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.690 11:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:47.690 11:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:47.947 true 00:07:47.947 11:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:47.947 11:12:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.204 11:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.461 11:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:48.461 11:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:48.461 true 00:07:48.719 11:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:48.719 11:12:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.650 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:07:49.650 11:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.907 11:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:49.907 11:12:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:50.164 true 00:07:50.164 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:50.164 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.421 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.679 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:50.679 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:50.936 true 00:07:50.936 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:50.936 11:12:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.868 11:12:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.125 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:52.125 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:52.382 true 00:07:52.382 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:52.382 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.639 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.895 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:52.895 11:12:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:53.152 true 00:07:53.152 11:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:53.152 11:12:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.084 11:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.341 11:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:54.341 11:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:54.598 true 00:07:54.598 11:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:54.598 11:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.855 11:12:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.168 11:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:55.168 11:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:55.451 true 00:07:55.451 11:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:55.451 11:12:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.381 11:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.381 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.638 11:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:56.638 11:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:56.894 true 00:07:56.894 11:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:56.894 11:12:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.150 11:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.406 11:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:57.406 11:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:57.662 true 00:07:57.662 11:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:57.662 11:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.919 11:12:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.176 11:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:58.176 11:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:58.432 true 00:07:58.432 11:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:58.432 11:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.364 11:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.621 11:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:59.621 11:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:59.878 true 00:07:59.878 11:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:07:59.878 11:12:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.134 11:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.390 11:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:00.390 11:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:00.646 true 00:08:00.646 11:12:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:00.646 11:12:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.575 11:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.832 11:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:01.832 11:12:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:02.089 true 00:08:02.089 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:02.089 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.346 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.604 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:02.604 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:02.861 true 00:08:02.861 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:02.861 11:12:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.792 11:12:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.049 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:04.050 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:04.307 true 00:08:04.307 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:04.307 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.564 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.821 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:04.821 11:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:05.078 true 00:08:05.078 11:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:05.078 11:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.335 11:12:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.592 11:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:05.592 11:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:05.849 true 00:08:05.849 11:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:05.849 11:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.782 11:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.038 11:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:07.038 11:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:07.294 true 00:08:07.294 11:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:07.294 11:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.226 Initializing NVMe Controllers 00:08:08.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.226 Controller IO queue size 128, less than required. 00:08:08.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.226 Controller IO queue size 128, less than required. 00:08:08.226 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:08.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:08.226 Initialization complete. Launching workers. 
00:08:08.226 ========================================================
00:08:08.226                                                                                                      Latency(us)
00:08:08.226 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:08:08.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1243.43       0.61   55283.93    2920.61 1013924.46
00:08:08.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10999.40       5.37   11637.12    1701.01  537588.32
00:08:08.226 ========================================================
00:08:08.226 Total                                                                     :   12242.83       5.98   16070.07    1701.01 1013924.46
00:08:08.226
00:08:08.226 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.484 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:08.484 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:08.741 true 00:08:08.741 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496098 00:08:08.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (496098) - No such process 00:08:08.741 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 496098 00:08:08.741 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.999 11:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:09.256 11:12:35
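The trace above repeatedly exercises a remove/re-add/resize loop visible in the `ns_hotplug_stress.sh@44`–`@50` markers: check the fio process is alive, detach NSID 1, re-attach the `Delay0` bdev, and grow the `NULL1` null bdev by one unit per pass. A minimal standalone sketch of that loop, with a stub `rpc` function standing in for SPDK's `scripts/rpc.py` (the stub and the iteration count are assumptions; the RPC method names and arguments are taken from the log):

```shell
# Stand-in for /var/jenkins/.../spdk/scripts/rpc.py so the sketch runs
# without a live SPDK target; it only echoes the call it would make.
rpc() { echo "rpc $*"; }

null_size=1021
for _ in 1 2 3; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: detach NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach bdev
    null_size=$((null_size + 1))                                 # sh@49: bump target size
    rpc bdev_null_resize NULL1 "$null_size"                      # sh@50: resize null bdev
done
echo "final null_size=$null_size"
```

In the real script the loop also guards each pass with `kill -0 <fio_pid>` (sh@44) and exits once the I/O workload has finished, which is the `kill: (496098) - No such process` line above.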
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:09.256 null0 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.256 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:09.514 null1 00:08:09.514 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.514 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.514 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:09.772 null2 00:08:09.772 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.772 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.772 11:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:10.030 null3 00:08:10.030 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.030 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:08:10.030 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:10.288 null4 00:08:10.288 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.288 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.288 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:10.545 null5 00:08:10.545 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.545 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.545 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:10.803 null6 00:08:10.803 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.803 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.803 11:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:11.061 null7 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.061 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 500655 500656 500658 500660 500662 500664 500666 500668 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.062 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.321 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.579 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.579 11:12:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.837 11:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.095 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.353 11:12:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.353 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.612 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.870 11:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.129 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.129 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.129 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.129 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.129 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.129 11:12:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.388 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.647 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.905 11:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.163 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.422 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.680 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.938 11:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.197 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.455 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.455 11:12:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.714 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.972 11:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.972 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.230 11:12:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.230 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.519 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.519 rmmod nvme_tcp 00:08:16.519 rmmod nvme_fabrics 00:08:16.519 rmmod nvme_keyring 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 495682 ']' 00:08:16.810 
11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 495682 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 495682 ']' 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 495682 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 495682 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 495682' 00:08:16.810 killing process with pid 495682 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 495682 00:08:16.810 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 495682 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:08:17.090 11:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.995 11:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.995 00:08:18.995 real 0m46.488s 00:08:18.995 user 3m30.985s 00:08:18.995 sys 0m16.506s 00:08:18.995 11:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.995 11:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.995 ************************************ 00:08:18.995 END TEST nvmf_ns_hotplug_stress 00:08:18.995 ************************************ 00:08:18.995 11:12:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.995 11:12:45 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:18.995 11:12:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.995 11:12:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.995 11:12:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.995 ************************************ 00:08:18.995 START TEST nvmf_connect_stress 00:08:18.995 ************************************ 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:18.995 * Looking for test storage... 
00:08:18.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.995 11:12:45 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.995 11:12:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.526 11:12:47 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.526 
11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.526 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.526 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.526 
11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.526 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.526 11:12:47 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.526 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:08:21.526 00:08:21.526 --- 10.0.0.2 ping statistics --- 00:08:21.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.526 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:08:21.526 00:08:21.526 --- 10.0.0.1 ping statistics --- 00:08:21.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.526 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.526 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.527 11:12:47 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=503414 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 503414 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 503414 ']' 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.527 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.527 [2024-07-12 11:12:47.400348] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:08:21.527 [2024-07-12 11:12:47.400432] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.527 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.527 [2024-07-12 11:12:47.463926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.527 [2024-07-12 11:12:47.574697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:21.527 [2024-07-12 11:12:47.574761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.527 [2024-07-12 11:12:47.574775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.527 [2024-07-12 11:12:47.574785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.527 [2024-07-12 11:12:47.574794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.527 [2024-07-12 11:12:47.574885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.527 [2024-07-12 11:12:47.575009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.527 [2024-07-12 11:12:47.575012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.785 [2024-07-12 11:12:47.722887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.785 11:12:47 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.785 [2024-07-12 11:12:47.764039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.785 NULL1 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=503467 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:21.785 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:21.786 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:21.786 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:21.786 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:21.786 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.786 11:12:47 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:21.786 11:12:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.786 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.786 11:12:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.044 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.044 11:12:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:22.044 11:12:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.044 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.044 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.609 11:12:48 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.609 11:12:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:22.609 11:12:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.609 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.609 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.866 11:12:48 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.866 11:12:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:22.866 11:12:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.866 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.866 11:12:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.123 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.123 11:12:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:23.123 11:12:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.123 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.123 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.381 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.381 11:12:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:23.381 11:12:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.381 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.381 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.638 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.638 11:12:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:23.638 11:12:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.638 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.638 11:12:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.203 11:12:50 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.203 11:12:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:24.203 11:12:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.203 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.203 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.461 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.461 11:12:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:24.461 11:12:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.461 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.461 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.719 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.719 11:12:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:24.719 11:12:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.719 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.719 11:12:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.976 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.976 11:12:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:24.976 11:12:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.976 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.976 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.233 11:12:51 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.233 11:12:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:25.233 11:12:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.233 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.233 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.798 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.798 11:12:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:25.798 11:12:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.798 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.798 11:12:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:26.056 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.056 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.313 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.313 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:26.313 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.313 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.313 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.571 11:12:52 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.571 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:26.571 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.571 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.571 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.134 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.134 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:27.134 11:12:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.134 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.134 11:12:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.390 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.390 11:12:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:27.390 11:12:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.390 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.390 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.647 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.647 11:12:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:27.647 11:12:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.647 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.647 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.904 11:12:53 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.904 11:12:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:27.904 11:12:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.904 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.904 11:12:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.160 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.160 11:12:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:28.160 11:12:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.160 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.160 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.723 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.723 11:12:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:28.723 11:12:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.723 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.723 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.979 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.979 11:12:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:28.979 11:12:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.979 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.979 11:12:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.254 11:12:55 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.254 11:12:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:29.254 11:12:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.254 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.254 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.511 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.511 11:12:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:29.511 11:12:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.511 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.511 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.768 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.768 11:12:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:29.768 11:12:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.768 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.768 11:12:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.330 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.330 11:12:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:30.330 11:12:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.330 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.330 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.586 11:12:56 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.586 11:12:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:30.586 11:12:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.586 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.586 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.843 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.843 11:12:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:30.843 11:12:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.843 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.843 11:12:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.100 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.100 11:12:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:31.100 11:12:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.100 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.100 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.357 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.357 11:12:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:31.357 11:12:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.357 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.357 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.921 11:12:57 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.921 11:12:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:31.921 11:12:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.921 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.921 11:12:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.921 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 503467 00:08:32.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (503467) - No such process 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 503467 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.179 rmmod nvme_tcp 00:08:32.179 rmmod nvme_fabrics 00:08:32.179 rmmod 
nvme_keyring 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 503414 ']' 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 503414 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 503414 ']' 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 503414 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 503414 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 503414' 00:08:32.179 killing process with pid 503414 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 503414 00:08:32.179 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 503414 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.438 11:12:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.342 11:13:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.342 00:08:34.342 real 0m15.424s 00:08:34.342 user 0m38.470s 00:08:34.342 sys 0m5.891s 00:08:34.342 11:13:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.342 11:13:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.342 ************************************ 00:08:34.342 END TEST nvmf_connect_stress 00:08:34.342 ************************************ 00:08:34.600 11:13:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:34.600 11:13:00 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:34.600 11:13:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.600 11:13:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.600 11:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 ************************************ 00:08:34.601 START TEST nvmf_fused_ordering 00:08:34.601 ************************************ 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:34.601 * Looking for test storage... 
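The connect_stress phase traced above follows a common liveness-poll pattern: line 34 of connect_stress.sh probes the stress process with `kill -0 $PID` and keeps issuing RPCs while it is still alive, and once `kill` reports "No such process" the script `wait`s on the PID (line 38) to reap it and collect its exit status. A minimal, self-contained sketch of that pattern (the function name, the counter, and the sleep interval are illustrative assumptions, not SPDK's code):

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the kill -0 / wait pattern seen in the trace.
# "kill -0" sends no signal; it only checks whether the PID still exists.

poll_while_alive() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        # In the real script this slot runs rpc_cmd against the NVMe-oF
        # target; here we just count iterations to keep the sketch runnable.
        : $((iterations++))
        sleep 0.1
    done
    wait "$pid"          # reap the child and propagate its exit status
}

iterations=0
sleep 0.5 &              # stand-in for the connect_stress binary
poll_while_alive "$!"
echo "polled $iterations times before the child exited"
```

Bash records a child's exit status as soon as it terminates, so `wait` still succeeds even when `kill -0` has already started failing; that is why the script can safely `wait` after the poll loop breaks.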
00:08:34.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.601 11:13:00 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.601 11:13:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.135 11:13:02 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=()
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=()
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=()
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=()
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:37.135 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:37.135 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:37.135 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:37.136 Found net devices under 0000:0a:00.0: cvl_0_0
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:37.136 Found net devices under 0000:0a:00.1: cvl_0_1
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:37.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:37.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms
00:08:37.136
00:08:37.136 --- 10.0.0.2 ping statistics ---
00:08:37.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:37.136 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:37.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:37.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:08:37.136
00:08:37.136 --- 10.0.0.1 ping statistics ---
00:08:37.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:37.136 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=506708
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 506708
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 506708 ']'
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:37.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:37.136 11:13:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 [2024-07-12 11:13:02.887973] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:08:37.136 [2024-07-12 11:13:02.888054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:37.136 EAL: No free 2048 kB hugepages reported on node 1
00:08:37.136 [2024-07-12 11:13:02.950813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.136 [2024-07-12 11:13:03.059145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:37.136 [2024-07-12 11:13:03.059222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:37.136 [2024-07-12 11:13:03.059237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:37.136 [2024-07-12 11:13:03.059248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:37.136 [2024-07-12 11:13:03.059258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:37.136 [2024-07-12 11:13:03.059295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 [2024-07-12 11:13:03.203103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 [2024-07-12 11:13:03.219322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.136 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.136 NULL1
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.137 11:13:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:37.137 [2024-07-12 11:13:03.263261] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:08:37.137 [2024-07-12 11:13:03.263317] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506735 ] 00:08:37.396 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.960 Attached to nqn.2016-06.io.spdk:cnode1 00:08:37.960 Namespace ID: 1 size: 1GB 00:08:37.960 fused_ordering(0) 00:08:37.960 fused_ordering(1) 00:08:37.961 fused_ordering(2) 00:08:37.961 fused_ordering(3) 00:08:37.961 fused_ordering(4) 00:08:37.961 fused_ordering(5) 00:08:37.961 fused_ordering(6) 00:08:37.961 fused_ordering(7) 00:08:37.961 fused_ordering(8) 00:08:37.961 fused_ordering(9) 00:08:37.961 fused_ordering(10) 00:08:37.961 fused_ordering(11) 00:08:37.961 fused_ordering(12) 00:08:37.961 fused_ordering(13) 00:08:37.961 fused_ordering(14) 00:08:37.961 fused_ordering(15) 00:08:37.961 fused_ordering(16) 00:08:37.961 fused_ordering(17) 00:08:37.961 fused_ordering(18) 00:08:37.961 fused_ordering(19) 00:08:37.961 fused_ordering(20) 00:08:37.961 fused_ordering(21) 00:08:37.961 fused_ordering(22) 00:08:37.961 fused_ordering(23) 00:08:37.961 fused_ordering(24) 00:08:37.961 fused_ordering(25) 00:08:37.961 fused_ordering(26) 
00:08:37.961 fused_ordering(27) 00:08:37.961 fused_ordering(28) 00:08:37.961 fused_ordering(29) 00:08:37.961 fused_ordering(30) 00:08:37.961 fused_ordering(31) 00:08:37.961 fused_ordering(32) 00:08:37.961 fused_ordering(33) 00:08:37.961 fused_ordering(34) 00:08:37.961 fused_ordering(35) 00:08:37.961 fused_ordering(36) 00:08:37.961 fused_ordering(37) 00:08:37.961 fused_ordering(38) 00:08:37.961 fused_ordering(39) 00:08:37.961 fused_ordering(40) 00:08:37.961 fused_ordering(41) 00:08:37.961 fused_ordering(42) 00:08:37.961 fused_ordering(43) 00:08:37.961 fused_ordering(44) 00:08:37.961 fused_ordering(45) 00:08:37.961 fused_ordering(46) 00:08:37.961 fused_ordering(47) 00:08:37.961 fused_ordering(48) 00:08:37.961 fused_ordering(49) 00:08:37.961 fused_ordering(50) 00:08:37.961 fused_ordering(51) 00:08:37.961 fused_ordering(52) 00:08:37.961 fused_ordering(53) 00:08:37.961 fused_ordering(54) 00:08:37.961 fused_ordering(55) 00:08:37.961 fused_ordering(56) 00:08:37.961 fused_ordering(57) 00:08:37.961 fused_ordering(58) 00:08:37.961 fused_ordering(59) 00:08:37.961 fused_ordering(60) 00:08:37.961 fused_ordering(61) 00:08:37.961 fused_ordering(62) 00:08:37.961 fused_ordering(63) 00:08:37.961 fused_ordering(64) 00:08:37.961 fused_ordering(65) 00:08:37.961 fused_ordering(66) 00:08:37.961 fused_ordering(67) 00:08:37.961 fused_ordering(68) 00:08:37.961 fused_ordering(69) 00:08:37.961 fused_ordering(70) 00:08:37.961 fused_ordering(71) 00:08:37.961 fused_ordering(72) 00:08:37.961 fused_ordering(73) 00:08:37.961 fused_ordering(74) 00:08:37.961 fused_ordering(75) 00:08:37.961 fused_ordering(76) 00:08:37.961 fused_ordering(77) 00:08:37.961 fused_ordering(78) 00:08:37.961 fused_ordering(79) 00:08:37.961 fused_ordering(80) 00:08:37.961 fused_ordering(81) 00:08:37.961 fused_ordering(82) 00:08:37.961 fused_ordering(83) 00:08:37.961 fused_ordering(84) 00:08:37.961 fused_ordering(85) 00:08:37.961 fused_ordering(86) 00:08:37.961 fused_ordering(87) 00:08:37.961 fused_ordering(88) 00:08:37.961 
fused_ordering(89) 00:08:37.961 fused_ordering(90) 00:08:37.961 fused_ordering(91) 00:08:37.961 fused_ordering(92) 00:08:37.961 fused_ordering(93) 00:08:37.961 fused_ordering(94) 00:08:37.961 fused_ordering(95) 00:08:37.961 fused_ordering(96) 00:08:37.961 fused_ordering(97) 00:08:37.961 fused_ordering(98) 00:08:37.961 fused_ordering(99) 00:08:37.961 fused_ordering(100) 00:08:37.961 fused_ordering(101) 00:08:37.961 fused_ordering(102) 00:08:37.961 fused_ordering(103) 00:08:37.961 fused_ordering(104) 00:08:37.961 fused_ordering(105) 00:08:37.961 fused_ordering(106) 00:08:37.961 fused_ordering(107) 00:08:37.961 fused_ordering(108) 00:08:37.961 fused_ordering(109) 00:08:37.961 fused_ordering(110) 00:08:37.961 fused_ordering(111) 00:08:37.961 fused_ordering(112) 00:08:37.961 fused_ordering(113) 00:08:37.961 fused_ordering(114) 00:08:37.961 fused_ordering(115) 00:08:37.961 fused_ordering(116) 00:08:37.961 fused_ordering(117) 00:08:37.961 fused_ordering(118) 00:08:37.961 fused_ordering(119) 00:08:37.961 fused_ordering(120) 00:08:37.961 fused_ordering(121) 00:08:37.961 fused_ordering(122) 00:08:37.961 fused_ordering(123) 00:08:37.961 fused_ordering(124) 00:08:37.961 fused_ordering(125) 00:08:37.961 fused_ordering(126) 00:08:37.961 fused_ordering(127) 00:08:37.961 fused_ordering(128) 00:08:37.961 fused_ordering(129) 00:08:37.961 fused_ordering(130) 00:08:37.961 fused_ordering(131) 00:08:37.961 fused_ordering(132) 00:08:37.961 fused_ordering(133) 00:08:37.961 fused_ordering(134) 00:08:37.961 fused_ordering(135) 00:08:37.961 fused_ordering(136) 00:08:37.961 fused_ordering(137) 00:08:37.961 fused_ordering(138) 00:08:37.961 fused_ordering(139) 00:08:37.961 fused_ordering(140) 00:08:37.961 fused_ordering(141) 00:08:37.961 fused_ordering(142) 00:08:37.961 fused_ordering(143) 00:08:37.961 fused_ordering(144) 00:08:37.961 fused_ordering(145) 00:08:37.961 fused_ordering(146) 00:08:37.961 fused_ordering(147) 00:08:37.961 fused_ordering(148) 00:08:37.961 fused_ordering(149) 
00:08:37.961 fused_ordering(150) 00:08:37.961 fused_ordering(151) 00:08:37.961 fused_ordering(152) 00:08:37.961 fused_ordering(153) 00:08:37.961 fused_ordering(154) 00:08:37.961 fused_ordering(155) 00:08:37.961 fused_ordering(156) 00:08:37.961 fused_ordering(157) 00:08:37.961 fused_ordering(158) 00:08:37.961 fused_ordering(159) 00:08:37.961 fused_ordering(160) 00:08:37.961 fused_ordering(161) 00:08:37.961 fused_ordering(162) 00:08:37.961 fused_ordering(163) 00:08:37.961 fused_ordering(164) 00:08:37.961 fused_ordering(165) 00:08:37.961 fused_ordering(166) 00:08:37.961 fused_ordering(167) 00:08:37.961 fused_ordering(168) 00:08:37.961 fused_ordering(169) 00:08:37.961 fused_ordering(170) 00:08:37.961 fused_ordering(171) 00:08:37.961 fused_ordering(172) 00:08:37.961 fused_ordering(173) 00:08:37.961 fused_ordering(174) 00:08:37.961 fused_ordering(175) 00:08:37.961 fused_ordering(176) 00:08:37.961 fused_ordering(177) 00:08:37.961 fused_ordering(178) 00:08:37.961 fused_ordering(179) 00:08:37.961 fused_ordering(180) 00:08:37.961 fused_ordering(181) 00:08:37.961 fused_ordering(182) 00:08:37.961 fused_ordering(183) 00:08:37.961 fused_ordering(184) 00:08:37.961 fused_ordering(185) 00:08:37.961 fused_ordering(186) 00:08:37.961 fused_ordering(187) 00:08:37.961 fused_ordering(188) 00:08:37.961 fused_ordering(189) 00:08:37.961 fused_ordering(190) 00:08:37.961 fused_ordering(191) 00:08:37.961 fused_ordering(192) 00:08:37.961 fused_ordering(193) 00:08:37.961 fused_ordering(194) 00:08:37.961 fused_ordering(195) 00:08:37.961 fused_ordering(196) 00:08:37.961 fused_ordering(197) 00:08:37.961 fused_ordering(198) 00:08:37.961 fused_ordering(199) 00:08:37.961 fused_ordering(200) 00:08:37.961 fused_ordering(201) 00:08:37.961 fused_ordering(202) 00:08:37.961 fused_ordering(203) 00:08:37.961 fused_ordering(204) 00:08:37.961 fused_ordering(205) 00:08:38.218 fused_ordering(206) 00:08:38.218 fused_ordering(207) 00:08:38.218 fused_ordering(208) 00:08:38.218 fused_ordering(209) 00:08:38.218 
fused_ordering(210) 00:08:38.218 fused_ordering(211) 00:08:38.218 fused_ordering(212) 00:08:38.218 fused_ordering(213) 00:08:38.218 fused_ordering(214) 00:08:38.218 fused_ordering(215) 00:08:38.218 fused_ordering(216) 00:08:38.218 fused_ordering(217) 00:08:38.218 fused_ordering(218) 00:08:38.218 fused_ordering(219) 00:08:38.218 fused_ordering(220) 00:08:38.218 fused_ordering(221) 00:08:38.218 fused_ordering(222) 00:08:38.218 fused_ordering(223) 00:08:38.218 fused_ordering(224) 00:08:38.218 fused_ordering(225) 00:08:38.218 fused_ordering(226) 00:08:38.218 fused_ordering(227) 00:08:38.218 fused_ordering(228) 00:08:38.218 fused_ordering(229) 00:08:38.218 fused_ordering(230) 00:08:38.218 fused_ordering(231) 00:08:38.218 fused_ordering(232) 00:08:38.218 fused_ordering(233) 00:08:38.218 fused_ordering(234) 00:08:38.218 fused_ordering(235) 00:08:38.218 fused_ordering(236) 00:08:38.219 fused_ordering(237) 00:08:38.219 fused_ordering(238) 00:08:38.219 fused_ordering(239) 00:08:38.219 fused_ordering(240) 00:08:38.219 fused_ordering(241) 00:08:38.219 fused_ordering(242) 00:08:38.219 fused_ordering(243) 00:08:38.219 fused_ordering(244) 00:08:38.219 fused_ordering(245) 00:08:38.219 fused_ordering(246) 00:08:38.219 fused_ordering(247) 00:08:38.219 fused_ordering(248) 00:08:38.219 fused_ordering(249) 00:08:38.219 fused_ordering(250) 00:08:38.219 fused_ordering(251) 00:08:38.219 fused_ordering(252) 00:08:38.219 fused_ordering(253) 00:08:38.219 fused_ordering(254) 00:08:38.219 fused_ordering(255) 00:08:38.219 fused_ordering(256) 00:08:38.219 fused_ordering(257) 00:08:38.219 fused_ordering(258) 00:08:38.219 fused_ordering(259) 00:08:38.219 fused_ordering(260) 00:08:38.219 fused_ordering(261) 00:08:38.219 fused_ordering(262) 00:08:38.219 fused_ordering(263) 00:08:38.219 fused_ordering(264) 00:08:38.219 fused_ordering(265) 00:08:38.219 fused_ordering(266) 00:08:38.219 fused_ordering(267) 00:08:38.219 fused_ordering(268) 00:08:38.219 fused_ordering(269) 00:08:38.219 fused_ordering(270) 
00:08:38.219 fused_ordering(271) 00:08:38.219 fused_ordering(272) 00:08:38.219 fused_ordering(273) 00:08:38.219 fused_ordering(274) 00:08:38.219 fused_ordering(275) 00:08:38.219 fused_ordering(276) 00:08:38.219 fused_ordering(277) 00:08:38.219 fused_ordering(278) 00:08:38.219 fused_ordering(279) 00:08:38.219 fused_ordering(280) 00:08:38.219 fused_ordering(281) 00:08:38.219 fused_ordering(282) 00:08:38.219 fused_ordering(283) 00:08:38.219 fused_ordering(284) 00:08:38.219 fused_ordering(285) 00:08:38.219 fused_ordering(286) 00:08:38.219 fused_ordering(287) 00:08:38.219 fused_ordering(288) 00:08:38.219 fused_ordering(289) 00:08:38.219 fused_ordering(290) 00:08:38.219 fused_ordering(291) 00:08:38.219 fused_ordering(292) 00:08:38.219 fused_ordering(293) 00:08:38.219 fused_ordering(294) 00:08:38.219 fused_ordering(295) 00:08:38.219 fused_ordering(296) 00:08:38.219 fused_ordering(297) 00:08:38.219 fused_ordering(298) 00:08:38.219 fused_ordering(299) 00:08:38.219 fused_ordering(300) 00:08:38.219 fused_ordering(301) 00:08:38.219 fused_ordering(302) 00:08:38.219 fused_ordering(303) 00:08:38.219 fused_ordering(304) 00:08:38.219 fused_ordering(305) 00:08:38.219 fused_ordering(306) 00:08:38.219 fused_ordering(307) 00:08:38.219 fused_ordering(308) 00:08:38.219 fused_ordering(309) 00:08:38.219 fused_ordering(310) 00:08:38.219 fused_ordering(311) 00:08:38.219 fused_ordering(312) 00:08:38.219 fused_ordering(313) 00:08:38.219 fused_ordering(314) 00:08:38.219 fused_ordering(315) 00:08:38.219 fused_ordering(316) 00:08:38.219 fused_ordering(317) 00:08:38.219 fused_ordering(318) 00:08:38.219 fused_ordering(319) 00:08:38.219 fused_ordering(320) 00:08:38.219 fused_ordering(321) 00:08:38.219 fused_ordering(322) 00:08:38.219 fused_ordering(323) 00:08:38.219 fused_ordering(324) 00:08:38.219 fused_ordering(325) 00:08:38.219 fused_ordering(326) 00:08:38.219 fused_ordering(327) 00:08:38.219 fused_ordering(328) 00:08:38.219 fused_ordering(329) 00:08:38.219 fused_ordering(330) 00:08:38.219 
fused_ordering(331) 00:08:38.219 fused_ordering(332) 00:08:38.219 fused_ordering(333) 00:08:38.219 fused_ordering(334) 00:08:38.219 fused_ordering(335) 00:08:38.219 fused_ordering(336) 00:08:38.219 fused_ordering(337) 00:08:38.219 fused_ordering(338) 00:08:38.219 fused_ordering(339) 00:08:38.219 fused_ordering(340) 00:08:38.219 fused_ordering(341) 00:08:38.219 fused_ordering(342) 00:08:38.219 fused_ordering(343) 00:08:38.219 fused_ordering(344) 00:08:38.219 fused_ordering(345) 00:08:38.219 fused_ordering(346) 00:08:38.219 fused_ordering(347) 00:08:38.219 fused_ordering(348) 00:08:38.219 fused_ordering(349) 00:08:38.219 fused_ordering(350) 00:08:38.219 fused_ordering(351) 00:08:38.219 fused_ordering(352) 00:08:38.219 fused_ordering(353) 00:08:38.219 fused_ordering(354) 00:08:38.219 fused_ordering(355) 00:08:38.219 fused_ordering(356) 00:08:38.219 fused_ordering(357) 00:08:38.219 fused_ordering(358) 00:08:38.219 fused_ordering(359) 00:08:38.219 fused_ordering(360) 00:08:38.219 fused_ordering(361) 00:08:38.219 fused_ordering(362) 00:08:38.219 fused_ordering(363) 00:08:38.219 fused_ordering(364) 00:08:38.219 fused_ordering(365) 00:08:38.219 fused_ordering(366) 00:08:38.219 fused_ordering(367) 00:08:38.219 fused_ordering(368) 00:08:38.219 fused_ordering(369) 00:08:38.219 fused_ordering(370) 00:08:38.219 fused_ordering(371) 00:08:38.219 fused_ordering(372) 00:08:38.219 fused_ordering(373) 00:08:38.219 fused_ordering(374) 00:08:38.219 fused_ordering(375) 00:08:38.219 fused_ordering(376) 00:08:38.219 fused_ordering(377) 00:08:38.219 fused_ordering(378) 00:08:38.219 fused_ordering(379) 00:08:38.219 fused_ordering(380) 00:08:38.219 fused_ordering(381) 00:08:38.219 fused_ordering(382) 00:08:38.219 fused_ordering(383) 00:08:38.219 fused_ordering(384) 00:08:38.219 fused_ordering(385) 00:08:38.219 fused_ordering(386) 00:08:38.219 fused_ordering(387) 00:08:38.219 fused_ordering(388) 00:08:38.219 fused_ordering(389) 00:08:38.219 fused_ordering(390) 00:08:38.219 fused_ordering(391) 
00:08:38.219 fused_ordering(392) 00:08:38.219 fused_ordering(393) 00:08:38.219 fused_ordering(394) 00:08:38.219 fused_ordering(395) 00:08:38.219 fused_ordering(396) 00:08:38.219 fused_ordering(397) 00:08:38.219 fused_ordering(398) 00:08:38.219 fused_ordering(399) 00:08:38.219 fused_ordering(400) 00:08:38.219 fused_ordering(401) 00:08:38.219 fused_ordering(402) 00:08:38.219 fused_ordering(403) 00:08:38.219 fused_ordering(404) 00:08:38.219 fused_ordering(405) 00:08:38.219 fused_ordering(406) 00:08:38.219 fused_ordering(407) 00:08:38.219 fused_ordering(408) 00:08:38.219 fused_ordering(409) 00:08:38.219 fused_ordering(410) 00:08:38.476 fused_ordering(411) 00:08:38.476 fused_ordering(412) 00:08:38.476 fused_ordering(413) 00:08:38.476 fused_ordering(414) 00:08:38.476 fused_ordering(415) 00:08:38.476 fused_ordering(416) 00:08:38.476 fused_ordering(417) 00:08:38.476 fused_ordering(418) 00:08:38.476 fused_ordering(419) 00:08:38.476 fused_ordering(420) 00:08:38.476 fused_ordering(421) 00:08:38.476 fused_ordering(422) 00:08:38.476 fused_ordering(423) 00:08:38.476 fused_ordering(424) 00:08:38.476 fused_ordering(425) 00:08:38.476 fused_ordering(426) 00:08:38.476 fused_ordering(427) 00:08:38.476 fused_ordering(428) 00:08:38.476 fused_ordering(429) 00:08:38.476 fused_ordering(430) 00:08:38.476 fused_ordering(431) 00:08:38.476 fused_ordering(432) 00:08:38.476 fused_ordering(433) 00:08:38.476 fused_ordering(434) 00:08:38.476 fused_ordering(435) 00:08:38.476 fused_ordering(436) 00:08:38.476 fused_ordering(437) 00:08:38.476 fused_ordering(438) 00:08:38.476 fused_ordering(439) 00:08:38.476 fused_ordering(440) 00:08:38.476 fused_ordering(441) 00:08:38.476 fused_ordering(442) 00:08:38.476 fused_ordering(443) 00:08:38.476 fused_ordering(444) 00:08:38.476 fused_ordering(445) 00:08:38.476 fused_ordering(446) 00:08:38.476 fused_ordering(447) 00:08:38.476 fused_ordering(448) 00:08:38.476 fused_ordering(449) 00:08:38.476 fused_ordering(450) 00:08:38.476 fused_ordering(451) 00:08:38.476 
00:08:38.476 - 00:08:39.608 fused_ordering(452) ... fused_ordering(996) [repetitive per-operation counter output, ordinals 452 through 996, elided]
00:08:39.608 fused_ordering(997) 00:08:39.608 fused_ordering(998) 00:08:39.608 fused_ordering(999) 00:08:39.608 fused_ordering(1000) 00:08:39.608 fused_ordering(1001) 00:08:39.608 fused_ordering(1002) 00:08:39.608 fused_ordering(1003) 00:08:39.608 fused_ordering(1004) 00:08:39.608 fused_ordering(1005) 00:08:39.608 fused_ordering(1006) 00:08:39.608 fused_ordering(1007) 00:08:39.608 fused_ordering(1008) 00:08:39.608 fused_ordering(1009) 00:08:39.608 fused_ordering(1010) 00:08:39.608 fused_ordering(1011) 00:08:39.608 fused_ordering(1012) 00:08:39.608 fused_ordering(1013) 00:08:39.608 fused_ordering(1014) 00:08:39.608 fused_ordering(1015) 00:08:39.608 fused_ordering(1016) 00:08:39.608 fused_ordering(1017) 00:08:39.608 fused_ordering(1018) 00:08:39.608 fused_ordering(1019) 00:08:39.608 fused_ordering(1020) 00:08:39.608 fused_ordering(1021) 00:08:39.608 fused_ordering(1022) 00:08:39.608 fused_ordering(1023) 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.608 rmmod nvme_tcp 00:08:39.608 rmmod nvme_fabrics 00:08:39.608 rmmod nvme_keyring 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:39.608 11:13:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 506708 ']' 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 506708 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 506708 ']' 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 506708 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.608 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 506708 00:08:39.866 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:39.866 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:39.866 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 506708' 00:08:39.866 killing process with pid 506708 00:08:39.866 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 506708 00:08:39.866 11:13:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 506708 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.126 11:13:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.123 11:13:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.123 00:08:42.123 real 0m7.529s 00:08:42.123 user 0m4.748s 00:08:42.123 sys 0m3.269s 00:08:42.123 11:13:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.123 11:13:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:42.123 ************************************ 00:08:42.123 END TEST nvmf_fused_ordering 00:08:42.123 ************************************ 00:08:42.123 11:13:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:42.123 11:13:08 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:42.123 11:13:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.123 11:13:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.123 11:13:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.123 ************************************ 00:08:42.123 START TEST nvmf_delete_subsystem 00:08:42.123 ************************************ 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:42.123 * Looking for test storage... 
00:08:42.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.123 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.124 11:13:08 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.124 11:13:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.655 11:13:10 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:44.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.655 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:44.656 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:44.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:44.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.656 
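The net-device discovery traced above (common.sh@383 and @399) globs the `net/` directory under each PCI device and strips the path prefix to keep only the interface name. A minimal standalone sketch of that expansion; the path below is illustrative, seeded with the value the glob returned in this run:

```shell
# Glob result as common.sh@383 would see it for 0000:0a:00.0 (illustrative).
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0")
# common.sh@399: bash prefix-strip, equivalent to basename on each element.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under 0000:0a:00.0: ${pci_net_devs[0]}"
```

The `##*/` expansion removes the longest match of `*/` from the front of each array element, which is why the log prints bare interface names like `cvl_0_0`.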
11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:08:44.656 00:08:44.656 --- 10.0.0.2 ping statistics --- 00:08:44.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.656 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:44.656 00:08:44.656 --- 10.0.0.1 ping statistics --- 00:08:44.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.656 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.656 
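The `nvmf_tcp_init` sequence traced above (common.sh@229 through @267) moves one port of the NIC into a network namespace so the target and initiator can talk over real TCP on one host. A dry-run sketch of that plumbing; `run` echoes instead of executing, since the real commands require root (drop the `echo` to apply them):

```shell
# Names taken from the trace above; cvl_0_0 is the target-side port,
# cvl_0_1 the initiator-side port.
NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
run() { echo "+ $*"; }                                 # dry-run wrapper
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                     # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev "$INI"                 # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target address
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP port
run ping -c 1 10.0.0.2                                 # verify initiator -> target path
```

The two ping blocks in the log are the harness verifying both directions of this path before starting `nvmf_tgt` inside the namespace.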
11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=508939 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 508939 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 508939 ']' 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.656 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.656 [2024-07-12 11:13:10.507469] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:08:44.656 [2024-07-12 11:13:10.507547] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.656 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.656 [2024-07-12 11:13:10.573259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.656 [2024-07-12 11:13:10.676909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:44.656 [2024-07-12 11:13:10.676983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.656 [2024-07-12 11:13:10.676997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.657 [2024-07-12 11:13:10.677007] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.657 [2024-07-12 11:13:10.677016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.657 [2024-07-12 11:13:10.677080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.657 [2024-07-12 11:13:10.677085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 [2024-07-12 11:13:10.823370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 [2024-07-12 11:13:10.839551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 NULL1 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 Delay0 00:08:44.914 11:13:10 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=509074 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:44.914 11:13:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:44.914 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.914 [2024-07-12 11:13:10.924322] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
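After launching `spdk_nvme_perf` in the background, the harness later polls it with `kill -0` (delete_subsystem.sh@35 through @38 in the trace) to confirm the perf process exits once the subsystem is deleted out from under it. A sketch of that liveness loop, with illustrative names:

```shell
# kill -0 sends no signal; it only checks whether the PID still exists.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do
    (( delay++ > 30 )) && return 1   # give up after ~15s of 0.5s sleeps
    sleep 0.5
  done
  return 0   # process is gone, as expected after nvmf_delete_subsystem
}
```

The "No such process" message later in the log is exactly this probe observing that the perf PID has exited.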
00:08:46.807 11:13:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.807 11:13:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.807 11:13:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error 
(sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error 
(sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 starting I/O failed: -6 00:08:47.064 [2024-07-12 11:13:12.966961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12377a0 is same with the state(5) to be set 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 
Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error 
(sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Write completed with error (sct=0, sc=8) 00:08:47.064 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 [2024-07-12 11:13:12.967336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fccd400cfe0 is same with the state(5) to be set 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, 
sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Write completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 Read completed with error (sct=0, sc=8) 00:08:47.065 [2024-07-12 11:13:12.967643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fccd400d600 is same with the state(5) to be set 00:08:47.995 [2024-07-12 11:13:13.938430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1238ac0 is same with the state(5) to be set 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, 
sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 [2024-07-12 11:13:13.969501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12375c0 is same with the state(5) to be set 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 
Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 [2024-07-12 11:13:13.970827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1237980 is same with the state(5) to be set 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Write completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.995 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 [2024-07-12 
11:13:13.971122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12373e0 is same with the state(5) to be set 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Write completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 Read completed with error (sct=0, sc=8) 00:08:47.996 [2024-07-12 11:13:13.971257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fccd400d2f0 is same with the state(5) to be set 00:08:47.996 Initializing NVMe Controllers 00:08:47.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:47.996 Controller IO queue size 128, less than required. 00:08:47.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:47.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:47.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:47.996 Initialization complete. Launching workers. 
00:08:47.996 ======================================================== 00:08:47.996 Latency(us) 00:08:47.996 Device Information : IOPS MiB/s Average min max 00:08:47.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.09 0.09 960942.55 569.30 1014943.05 00:08:47.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.80 0.07 900989.36 595.18 1014448.86 00:08:47.996 ======================================================== 00:08:47.996 Total : 325.88 0.16 933384.16 569.30 1014943.05 00:08:47.996 00:08:47.996 [2024-07-12 11:13:13.972187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1238ac0 (9): Bad file descriptor 00:08:47.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:47.996 11:13:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.996 11:13:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:47.996 11:13:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 509074 00:08:47.996 11:13:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 509074 00:08:48.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (509074) - No such process 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 509074 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 509074 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 509074 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 [2024-07-12 11:13:14.490744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=509485 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:48.560 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:48.561 11:13:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.561 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.561 [2024-07-12 11:13:14.547629] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:49.126 11:13:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.126 11:13:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:49.126 11:13:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.383 11:13:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.383 11:13:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:49.383 11:13:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:49.947 11:13:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:49.947 11:13:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:49.947 11:13:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:50.510 11:13:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:50.510 11:13:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:50.510 11:13:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.075 11:13:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.075 11:13:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:51.075 11:13:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.639 11:13:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:51.639 11:13:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485 00:08:51.639 11:13:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.639 Initializing NVMe Controllers 00:08:51.639 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:51.639 Controller IO queue size 128, less than required.
00:08:51.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:51.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:51.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:51.639 Initialization complete. Launching workers.
00:08:51.639 ========================================================
00:08:51.639 Latency(us)
00:08:51.639 Device Information : IOPS MiB/s Average min max
00:08:51.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003179.37 1000190.19 1010776.53
00:08:51.639 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005140.34 1000167.69 1011498.09
00:08:51.639 ========================================================
00:08:51.639 Total : 256.00 0.12 1004159.86 1000167.69 1011498.09
00:08:51.639
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 509485
00:08:51.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (509485) - No such process
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 509485
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '['
tcp == tcp ']' 00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.896 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.896 rmmod nvme_tcp 00:08:52.154 rmmod nvme_fabrics 00:08:52.154 rmmod nvme_keyring 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 508939 ']' 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 508939 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 508939 ']' 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 508939 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 508939 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 508939' 00:08:52.154 killing process with pid 508939 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 508939 00:08:52.154 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # wait 508939
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:52.412 11:13:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:54.317 11:13:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:54.317
00:08:54.317 real 0m12.313s
00:08:54.317 user 0m27.370s
00:08:54.317 sys 0m3.103s
00:08:54.317 11:13:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:54.317 11:13:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:54.317 ************************************
00:08:54.317 END TEST nvmf_delete_subsystem
00:08:54.317 ************************************
00:08:54.317 11:13:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:08:54.317 11:13:20 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:08:54.317 11:13:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:08:54.317 11:13:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:54.317 11:13:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:54.575 ************************************
00:08:54.575 START TEST
nvmf_ns_masking 00:08:54.575 ************************************ 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:54.575 * Looking for test storage... 00:08:54.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b1de359e-445e-43d3-bb1c-c3b12fab71dc 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=126ff996-15f0-466b-adb6-cd67783a5b2b 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=668762ce-c178-4e7b-b936-ae93938b65b1 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.575 11:13:20 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.575 11:13:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:08:56.477 11:13:22 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:56.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:56.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:56.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:56.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.477 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:56.735 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:56.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:56.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms
00:08:56.735
00:08:56.735 --- 10.0.0.2 ping statistics ---
00:08:56.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.736 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:56.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:56.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms
00:08:56.736
00:08:56.736 --- 10.0.0.1 ping statistics ---
00:08:56.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.736 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:56.736 11:13:22
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=511831 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 511831 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 511831 ']' 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.736 11:13:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:56.736 [2024-07-12 11:13:22.790722] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:08:56.736 [2024-07-12 11:13:22.790817] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.736 [2024-07-12 11:13:22.854070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.997 [2024-07-12 11:13:22.963595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.997 [2024-07-12 11:13:22.963645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.997 [2024-07-12 11:13:22.963674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.997 [2024-07-12 11:13:22.963685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.997 [2024-07-12 11:13:22.963695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
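The trace above starts `nvmf_tgt` inside the `cvl_0_0_ns_spdk` network namespace and then blocks in `waitforlisten` until the target's UNIX-domain RPC socket (`/var/tmp/spdk.sock`) appears. A minimal sketch of that polling pattern is below; the function name, retry count, and sleep interval are illustrative, not SPDK's exact implementation:

```shell
# Poll until a path (e.g. an RPC socket) appears, up to max_retries
# attempts with a short sleep between them; fail if it never shows up.
waitfor_sock() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [[ -e "$path" ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper additionally checks that the target process is still alive between polls, so a crashed `nvmf_tgt` fails fast instead of burning the full retry budget.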
00:08:56.997 [2024-07-12 11:13:22.963718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.997 11:13:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.256 [2024-07-12 11:13:23.334562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.256 11:13:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:57.256 11:13:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:57.256 11:13:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:57.821 Malloc1 00:08:57.821 11:13:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:58.093 Malloc2 00:08:58.093 11:13:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:58.350 11:13:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:58.608 11:13:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.865 [2024-07-12 11:13:24.752905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 668762ce-c178-4e7b-b936-ae93938b65b1 -a 10.0.0.2 -s 4420 -i 4 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:58.865 11:13:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:01.389 
11:13:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:01.389 11:13:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:01.389 [ 0]:0x1 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7af7c6d1ff0849d895d0efaf3d2c21ae 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7af7c6d1ff0849d895d0efaf3d2c21ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:01.389 [ 0]:0x1 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7af7c6d1ff0849d895d0efaf3d2c21ae 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7af7c6d1ff0849d895d0efaf3d2c21ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:01.389 [ 1]:0x2 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.389 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.646 11:13:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:01.904 11:13:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:01.904 11:13:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 668762ce-c178-4e7b-b936-ae93938b65b1 -a 10.0.0.2 -s 4420 -i 4 00:09:02.162 11:13:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:02.162 11:13:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:02.162 11:13:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.162 11:13:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:02.162 11:13:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:02.162 11:13:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
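The `ns_is_visible` / `NOT ns_is_visible` checks running through this trace hinge on one fact: `nvme id-ns` reports an all-zero NGUID for a namespace the host cannot see (here, after `Malloc1` was re-added with `--no-auto-visible`), and the real NGUID once visibility is granted. A sketch of just that comparison, with the NGUID passed in as a string so it can be shown without a live `/dev/nvme0`:

```shell
# A namespace counts as visible when its reported NGUID is non-zero;
# a masked (inactive) namespace identifies as 32 zero hex digits.
nguid_visible() {
    local nguid=$1
    [[ "$nguid" != "00000000000000000000000000000000" ]]
}
```

In the script itself the input comes from `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`, and the `NOT` wrapper inverts the exit status to assert invisibility.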
00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:04.055 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:04.312 [ 0]:0x2 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:04.312 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:04.569 [ 0]:0x1 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7af7c6d1ff0849d895d0efaf3d2c21ae 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7af7c6d1ff0849d895d0efaf3d2c21ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:09:04.569 [ 1]:0x2 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:04.569 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:04.827 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:05.084 [ 0]:0x2 00:09:05.084 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:05.084 11:13:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:05.084 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:05.084 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:05.084 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:05.084 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.084 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:05.341 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:05.341 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 668762ce-c178-4e7b-b936-ae93938b65b1 -a 10.0.0.2 -s 4420 -i 4 00:09:05.598 11:13:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:05.598 11:13:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:05.598 11:13:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.598 11:13:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:05.598 11:13:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:05.598 11:13:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:07.494 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:07.782 [ 0]:0x1 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7af7c6d1ff0849d895d0efaf3d2c21ae 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7af7c6d1ff0849d895d0efaf3d2c21ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:07.782 [ 1]:0x2 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:07.782 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.064 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:08.064 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.064 11:13:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:08.064 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.065 11:13:34 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:08.065 [ 0]:0x2 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.065 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:08.322 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:08.322 [2024-07-12 11:13:34.454002] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:08.580 request: 00:09:08.580 { 00:09:08.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.580 "nsid": 2, 00:09:08.580 "host": "nqn.2016-06.io.spdk:host1", 00:09:08.580 "method": "nvmf_ns_remove_host", 00:09:08.580 "req_id": 1 00:09:08.580 } 00:09:08.580 Got JSON-RPC error response 00:09:08.580 response: 00:09:08.580 { 00:09:08.580 "code": -32602, 00:09:08.580 "message": "Invalid parameters" 00:09:08.580 } 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:08.580 [ 0]:0x2 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b998931afc4a4928954a6d0171a4c444 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b998931afc4a4928954a6d0171a4c444 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:08.580 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=513455 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 513455 /var/tmp/host.sock 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 513455 ']' 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:08.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.837 11:13:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:08.837 [2024-07-12 11:13:34.798067] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:09:08.837 [2024-07-12 11:13:34.798147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513455 ] 00:09:08.837 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.837 [2024-07-12 11:13:34.858347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.837 [2024-07-12 11:13:34.964236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.095 11:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.095 11:13:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:09.095 11:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.353 11:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:09.611 11:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b1de359e-445e-43d3-bb1c-c3b12fab71dc 00:09:09.611 11:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:09.611 11:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B1DE359E445E43D3BB1CC3B12FAB71DC -i 00:09:09.868 11:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 
-- # uuid2nguid 126ff996-15f0-466b-adb6-cd67783a5b2b 00:09:09.868 11:13:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:09.868 11:13:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 126FF99615F0466BADB6CD67783A5B2B -i 00:09:10.125 11:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:10.383 11:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:10.640 11:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:10.640 11:13:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:10.897 nvme0n1 00:09:10.897 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:10.897 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:11.462 nvme1n2 00:09:11.462 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 
00:09:11.462 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:11.462 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:11.462 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:11.462 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:11.719 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:11.720 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:11.720 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:11.720 11:13:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:11.977 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b1de359e-445e-43d3-bb1c-c3b12fab71dc == \b\1\d\e\3\5\9\e\-\4\4\5\e\-\4\3\d\3\-\b\b\1\c\-\c\3\b\1\2\f\a\b\7\1\d\c ]] 00:09:11.977 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:11.977 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:11.977 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 126ff996-15f0-466b-adb6-cd67783a5b2b == \1\2\6\f\f\9\9\6\-\1\5\f\0\-\4\6\6\b\-\a\d\b\6\-\c\d\6\7\7\8\3\a\5\b\2\b ]] 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 513455 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 513455 ']' 
00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 513455 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 513455 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 513455' 00:09:12.235 killing process with pid 513455 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 513455 00:09:12.235 11:13:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 513455 00:09:12.799 11:13:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.056 rmmod nvme_tcp 00:09:13.056 rmmod nvme_fabrics 00:09:13.056 rmmod 
nvme_keyring 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 511831 ']' 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 511831 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 511831 ']' 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 511831 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 511831 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 511831' 00:09:13.056 killing process with pid 511831 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 511831 00:09:13.056 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 511831 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.623 11:13:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.623 11:13:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.528 11:13:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.528 00:09:15.528 real 0m21.049s 00:09:15.528 user 0m27.105s 00:09:15.528 sys 0m4.173s 00:09:15.528 11:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.528 11:13:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.528 ************************************ 00:09:15.528 END TEST nvmf_ns_masking 00:09:15.528 ************************************ 00:09:15.528 11:13:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:15.528 11:13:41 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:15.528 11:13:41 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:15.528 11:13:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.528 11:13:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.528 11:13:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.528 ************************************ 00:09:15.528 START TEST nvmf_nvme_cli 00:09:15.528 ************************************ 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:15.528 * Looking for test storage... 
00:09:15.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.528 11:13:41 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:15.528 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.529 11:13:41 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.529 11:13:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.057 11:13:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.057 11:13:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:18.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:18.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.057 11:13:43 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:18.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:18.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.057 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:09:18.058 00:09:18.058 --- 10.0.0.2 ping statistics --- 00:09:18.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.058 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:09:18.058 00:09:18.058 --- 10.0.0.1 ping statistics --- 00:09:18.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.058 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=515951 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 515951 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 515951 ']' 
00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.058 11:13:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:18.058 [2024-07-12 11:13:43.876963] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:09:18.058 [2024-07-12 11:13:43.877051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.058 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.058 [2024-07-12 11:13:43.939731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.058 [2024-07-12 11:13:44.042069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.058 [2024-07-12 11:13:44.042121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.058 [2024-07-12 11:13:44.042149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.058 [2024-07-12 11:13:44.042160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.058 [2024-07-12 11:13:44.042170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:18.058 [2024-07-12 11:13:44.042236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:18.058 [2024-07-12 11:13:44.042300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:18.058 [2024-07-12 11:13:44.042363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:18.058 [2024-07-12 11:13:44.042366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.058 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.315 [2024-07-12 11:13:44.191599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.315 Malloc0
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.315 Malloc1
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.315 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.316 [2024-07-12 11:13:44.272702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:09:18.316 
00:09:18.316 Discovery Log Number of Records 2, Generation counter 2
00:09:18.316 =====Discovery Log Entry 0======
00:09:18.316 trtype: tcp
00:09:18.316 adrfam: ipv4
00:09:18.316 subtype: current discovery subsystem
00:09:18.316 treq: not required
00:09:18.316 portid: 0
00:09:18.316 trsvcid: 4420
00:09:18.316 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:18.316 traddr: 10.0.0.2
00:09:18.316 eflags: explicit discovery connections, duplicate discovery information
00:09:18.316 sectype: none
00:09:18.316 =====Discovery Log Entry 1======
00:09:18.316 trtype: tcp
00:09:18.316 adrfam: ipv4
00:09:18.316 subtype: nvme subsystem
00:09:18.316 treq: not required
00:09:18.316 portid: 0
00:09:18.316 trsvcid: 4420
00:09:18.316 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:18.316 traddr: 10.0.0.2
00:09:18.316 eflags: none
00:09:18.316 sectype: none
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:09:18.316 11:13:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:19.246 11:13:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:09:19.246 11:13:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:09:19.246 11:13:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:19.246 11:13:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:09:19.246 11:13:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:09:19.246 11:13:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2
00:09:21.141 /dev/nvme0n1 ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:21.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:21.141 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:21.141 rmmod nvme_tcp
00:09:21.141 rmmod nvme_fabrics
00:09:21.141 rmmod nvme_keyring
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 515951 ']'
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 515951
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 515951 ']'
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 515951
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 515951
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 515951'
00:09:21.399 killing process with pid 515951
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 515951
00:09:21.399 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 515951
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:21.658 11:13:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:23.563 11:13:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:23.563 
00:09:23.563 real 0m8.137s
00:09:23.563 user 0m14.713s
00:09:23.563 sys 0m2.200s
00:09:23.563 11:13:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:23.821 11:13:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:23.821 ************************************
00:09:23.821 END TEST nvmf_nvme_cli
00:09:23.821 ************************************
00:09:23.821 11:13:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:23.821 11:13:49 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]]
00:09:23.821 11:13:49 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:09:23.821 11:13:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:23.821 11:13:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:23.821 11:13:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:23.821 ************************************
00:09:23.821 START TEST nvmf_vfio_user
00:09:23.821 ************************************
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:09:23.821 * Looking for test storage...
00:09:23.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:23.821 11:13:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=516755
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 516755'
00:09:23.822 Process pid: 516755
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 516755
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 516755 ']'
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:23.822 11:13:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:09:23.822 [2024-07-12 11:13:49.858645] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:09:23.822 [2024-07-12 11:13:49.858726] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:23.822 EAL: No free 2048 kB hugepages reported on node 1
00:09:23.822 [2024-07-12 11:13:49.917089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:24.079 [2024-07-12 11:13:50.028555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:24.079 [2024-07-12 11:13:50.028614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:24.079 [2024-07-12 11:13:50.028650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:24.079 [2024-07-12 11:13:50.028662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:24.079 [2024-07-12 11:13:50.028672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:24.079 [2024-07-12 11:13:50.028759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:24.079 [2024-07-12 11:13:50.028820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:24.079 [2024-07-12 11:13:50.028953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:24.079 [2024-07-12 11:13:50.028957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.079 11:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:24.079 11:13:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0
00:09:24.079 11:13:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:09:25.449 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:09:25.449 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:09:25.449 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:09:25.449 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:09:25.449 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:09:25.449 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:09:25.708 Malloc1
00:09:25.708 11:13:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:09:25.965 11:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:09:26.222 11:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:09:26.479 11:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:09:26.479 11:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:09:26.479 11:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:09:26.737 Malloc2
00:09:26.737 11:13:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:09:26.993 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:09:27.251 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:09:27.508 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:09:27.508 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:09:27.508 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:09:27.508 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:09:27.508 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:09:27.508 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:09:27.508 [2024-07-12 11:13:53.613043] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:09:27.508 [2024-07-12 11:13:53.613084] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517298 ]
00:09:27.508 EAL: No free 2048 kB hugepages reported on node 1
00:09:27.766 [2024-07-12 11:13:53.645233] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:09:27.766 [2024-07-12 11:13:53.654274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:09:27.766 [2024-07-12 11:13:53.654302] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f64b8d93000
00:09:27.766 [2024-07-12 11:13:53.655268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.656267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.657267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.658278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.659282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.660286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.661292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.662296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:09:27.766 [2024-07-12 11:13:53.663305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:09:27.766 [2024-07-12 11:13:53.663324] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f64b8d88000
00:09:27.766 [2024-07-12 11:13:53.664439] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:09:27.766 [2024-07-12 11:13:53.680045] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:09:27.766 [2024-07-12 11:13:53.680080] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout)
00:09:27.766 [2024-07-12 11:13:53.682417] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:09:27.766 [2024-07-12 11:13:53.682472] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:09:27.766 [2024-07-12 11:13:53.682561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout)
00:09:27.766 [2024-07-12 11:13:53.682588] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout)
00:09:27.767 [2024-07-12 11:13:53.682598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout)
00:09:27.767 [2024-07-12 11:13:53.683408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:09:27.767 [2024-07-12 11:13:53.683428] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout)
00:09:27.767 [2024-07-12 11:13:53.683447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout)
00:09:27.767 [2024-07-12 11:13:53.684412] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:09:27.767 [2024-07-12 11:13:53.684432] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout)
00:09:27.767 [2024-07-12 11:13:53.684445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms)
00:09:27.767 [2024-07-12 11:13:53.685412] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:09:27.767 [2024-07-12 11:13:53.685430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:09:27.767 [2024-07-12 11:13:53.686420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:09:27.767 [2024-07-12 11:13:53.686439]
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:27.767 [2024-07-12 11:13:53.686448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:27.767 [2024-07-12 11:13:53.686459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:27.767 [2024-07-12 11:13:53.686568] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:27.767 [2024-07-12 11:13:53.686575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:27.767 [2024-07-12 11:13:53.686583] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:27.767 [2024-07-12 11:13:53.687424] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:27.767 [2024-07-12 11:13:53.688425] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:27.767 [2024-07-12 11:13:53.689428] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:27.767 [2024-07-12 11:13:53.690426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:27.767 [2024-07-12 11:13:53.690526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:27.767 [2024-07-12 11:13:53.692948] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:27.767 [2024-07-12 11:13:53.692968] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:27.767 [2024-07-12 11:13:53.692977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693002] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:27.767 [2024-07-12 11:13:53.693015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693039] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:27.767 [2024-07-12 11:13:53.693052] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:27.767 [2024-07-12 11:13:53.693071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693165] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:27.767 [2024-07-12 11:13:53.693177] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:27.767 [2024-07-12 11:13:53.693184] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:09:27.767 [2024-07-12 11:13:53.693192] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:27.767 [2024-07-12 11:13:53.693199] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:27.767 [2024-07-12 11:13:53.693207] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:27.767 [2024-07-12 11:13:53.693214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693227] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:27.767 [2024-07-12 11:13:53.693290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:27.767 [2024-07-12 11:13:53.693301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:27.767 [2024-07-12 11:13:53.693313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:27.767 [2024-07-12 11:13:53.693321] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693373] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:27.767 [2024-07-12 11:13:53.693381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:09:27.767 [2024-07-12 11:13:53.693503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693516] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:27.767 [2024-07-12 11:13:53.693524] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:27.767 [2024-07-12 11:13:53.693534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693566] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:27.767 [2024-07-12 11:13:53.693584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693610] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:27.767 [2024-07-12 11:13:53.693618] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:27.767 [2024-07-12 11:13:53.693627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:27.767 
[2024-07-12 11:13:53.693672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693698] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:27.767 [2024-07-12 11:13:53.693706] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:27.767 [2024-07-12 11:13:53.693716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693771] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693809] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:27.767 [2024-07-12 11:13:53.693816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:27.767 [2024-07-12 11:13:53.693824] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:27.767 [2024-07-12 11:13:53.693848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:27.767 [2024-07-12 11:13:53.693886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:27.767 [2024-07-12 11:13:53.693909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:27.768 [2024-07-12 11:13:53.693922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:27.768 [2024-07-12 11:13:53.693939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:27.768 [2024-07-12 11:13:53.693951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:27.768 [2024-07-12 11:13:53.693967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:27.768 [2024-07-12 11:13:53.693979] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:27.768 [2024-07-12 11:13:53.694002] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:27.768 [2024-07-12 11:13:53.694012] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:27.768 [2024-07-12 11:13:53.694019] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:27.768 [2024-07-12 11:13:53.694025] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:27.768 [2024-07-12 11:13:53.694035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:27.768 [2024-07-12 11:13:53.694047] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:27.768 [2024-07-12 11:13:53.694055] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:27.768 [2024-07-12 11:13:53.694064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:27.768 [2024-07-12 11:13:53.694075] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:27.768 [2024-07-12 11:13:53.694083] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:27.768 [2024-07-12 11:13:53.694091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:27.768 [2024-07-12 11:13:53.694103] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:27.768 [2024-07-12 11:13:53.694111] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:27.768 [2024-07-12 11:13:53.694120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:27.768 [2024-07-12 11:13:53.694132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:27.768 [2024-07-12 11:13:53.694155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:27.768 [2024-07-12 11:13:53.694173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:27.768 [2024-07-12 11:13:53.694200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:27.768 ===================================================== 00:09:27.768 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:27.768 ===================================================== 00:09:27.768 Controller Capabilities/Features 00:09:27.768 ================================ 00:09:27.768 Vendor ID: 4e58 00:09:27.768 Subsystem Vendor ID: 4e58 00:09:27.768 Serial Number: SPDK1 00:09:27.768 Model Number: SPDK bdev Controller 00:09:27.768 Firmware Version: 24.09 00:09:27.768 Recommended Arb Burst: 6 00:09:27.768 IEEE OUI Identifier: 8d 6b 50 00:09:27.768 Multi-path I/O 00:09:27.768 May have multiple subsystem ports: Yes 00:09:27.768 May have multiple controllers: Yes 00:09:27.768 Associated with SR-IOV VF: No 00:09:27.768 Max Data Transfer Size: 131072 00:09:27.768 Max Number of Namespaces: 32 00:09:27.768 Max Number of I/O Queues: 127 00:09:27.768 NVMe Specification Version (VS): 1.3 00:09:27.768 NVMe Specification Version (Identify): 1.3 00:09:27.768 Maximum Queue Entries: 256 00:09:27.768 
Contiguous Queues Required: Yes 00:09:27.768 Arbitration Mechanisms Supported 00:09:27.768 Weighted Round Robin: Not Supported 00:09:27.768 Vendor Specific: Not Supported 00:09:27.768 Reset Timeout: 15000 ms 00:09:27.768 Doorbell Stride: 4 bytes 00:09:27.768 NVM Subsystem Reset: Not Supported 00:09:27.768 Command Sets Supported 00:09:27.768 NVM Command Set: Supported 00:09:27.768 Boot Partition: Not Supported 00:09:27.768 Memory Page Size Minimum: 4096 bytes 00:09:27.768 Memory Page Size Maximum: 4096 bytes 00:09:27.768 Persistent Memory Region: Not Supported 00:09:27.768 Optional Asynchronous Events Supported 00:09:27.768 Namespace Attribute Notices: Supported 00:09:27.768 Firmware Activation Notices: Not Supported 00:09:27.768 ANA Change Notices: Not Supported 00:09:27.768 PLE Aggregate Log Change Notices: Not Supported 00:09:27.768 LBA Status Info Alert Notices: Not Supported 00:09:27.768 EGE Aggregate Log Change Notices: Not Supported 00:09:27.768 Normal NVM Subsystem Shutdown event: Not Supported 00:09:27.768 Zone Descriptor Change Notices: Not Supported 00:09:27.768 Discovery Log Change Notices: Not Supported 00:09:27.768 Controller Attributes 00:09:27.768 128-bit Host Identifier: Supported 00:09:27.768 Non-Operational Permissive Mode: Not Supported 00:09:27.768 NVM Sets: Not Supported 00:09:27.768 Read Recovery Levels: Not Supported 00:09:27.768 Endurance Groups: Not Supported 00:09:27.768 Predictable Latency Mode: Not Supported 00:09:27.768 Traffic Based Keep ALive: Not Supported 00:09:27.768 Namespace Granularity: Not Supported 00:09:27.768 SQ Associations: Not Supported 00:09:27.768 UUID List: Not Supported 00:09:27.768 Multi-Domain Subsystem: Not Supported 00:09:27.768 Fixed Capacity Management: Not Supported 00:09:27.768 Variable Capacity Management: Not Supported 00:09:27.768 Delete Endurance Group: Not Supported 00:09:27.768 Delete NVM Set: Not Supported 00:09:27.768 Extended LBA Formats Supported: Not Supported 00:09:27.768 Flexible Data Placement 
Supported: Not Supported 00:09:27.768 00:09:27.768 Controller Memory Buffer Support 00:09:27.768 ================================ 00:09:27.768 Supported: No 00:09:27.768 00:09:27.768 Persistent Memory Region Support 00:09:27.768 ================================ 00:09:27.768 Supported: No 00:09:27.768 00:09:27.768 Admin Command Set Attributes 00:09:27.768 ============================ 00:09:27.768 Security Send/Receive: Not Supported 00:09:27.768 Format NVM: Not Supported 00:09:27.768 Firmware Activate/Download: Not Supported 00:09:27.768 Namespace Management: Not Supported 00:09:27.768 Device Self-Test: Not Supported 00:09:27.768 Directives: Not Supported 00:09:27.768 NVMe-MI: Not Supported 00:09:27.768 Virtualization Management: Not Supported 00:09:27.768 Doorbell Buffer Config: Not Supported 00:09:27.768 Get LBA Status Capability: Not Supported 00:09:27.768 Command & Feature Lockdown Capability: Not Supported 00:09:27.768 Abort Command Limit: 4 00:09:27.768 Async Event Request Limit: 4 00:09:27.768 Number of Firmware Slots: N/A 00:09:27.768 Firmware Slot 1 Read-Only: N/A 00:09:27.768 Firmware Activation Without Reset: N/A 00:09:27.768 Multiple Update Detection Support: N/A 00:09:27.768 Firmware Update Granularity: No Information Provided 00:09:27.768 Per-Namespace SMART Log: No 00:09:27.768 Asymmetric Namespace Access Log Page: Not Supported 00:09:27.768 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:27.768 Command Effects Log Page: Supported 00:09:27.768 Get Log Page Extended Data: Supported 00:09:27.768 Telemetry Log Pages: Not Supported 00:09:27.768 Persistent Event Log Pages: Not Supported 00:09:27.768 Supported Log Pages Log Page: May Support 00:09:27.768 Commands Supported & Effects Log Page: Not Supported 00:09:27.768 Feature Identifiers & Effects Log Page:May Support 00:09:27.768 NVMe-MI Commands & Effects Log Page: May Support 00:09:27.768 Data Area 4 for Telemetry Log: Not Supported 00:09:27.768 Error Log Page Entries Supported: 128 00:09:27.768 Keep 
Alive: Supported 00:09:27.768 Keep Alive Granularity: 10000 ms 00:09:27.768 00:09:27.768 NVM Command Set Attributes 00:09:27.768 ========================== 00:09:27.768 Submission Queue Entry Size 00:09:27.768 Max: 64 00:09:27.768 Min: 64 00:09:27.768 Completion Queue Entry Size 00:09:27.768 Max: 16 00:09:27.768 Min: 16 00:09:27.768 Number of Namespaces: 32 00:09:27.768 Compare Command: Supported 00:09:27.768 Write Uncorrectable Command: Not Supported 00:09:27.768 Dataset Management Command: Supported 00:09:27.768 Write Zeroes Command: Supported 00:09:27.768 Set Features Save Field: Not Supported 00:09:27.768 Reservations: Not Supported 00:09:27.768 Timestamp: Not Supported 00:09:27.768 Copy: Supported 00:09:27.768 Volatile Write Cache: Present 00:09:27.768 Atomic Write Unit (Normal): 1 00:09:27.768 Atomic Write Unit (PFail): 1 00:09:27.768 Atomic Compare & Write Unit: 1 00:09:27.768 Fused Compare & Write: Supported 00:09:27.768 Scatter-Gather List 00:09:27.768 SGL Command Set: Supported (Dword aligned) 00:09:27.768 SGL Keyed: Not Supported 00:09:27.768 SGL Bit Bucket Descriptor: Not Supported 00:09:27.768 SGL Metadata Pointer: Not Supported 00:09:27.768 Oversized SGL: Not Supported 00:09:27.768 SGL Metadata Address: Not Supported 00:09:27.768 SGL Offset: Not Supported 00:09:27.768 Transport SGL Data Block: Not Supported 00:09:27.768 Replay Protected Memory Block: Not Supported 00:09:27.768 00:09:27.768 Firmware Slot Information 00:09:27.768 ========================= 00:09:27.768 Active slot: 1 00:09:27.768 Slot 1 Firmware Revision: 24.09 00:09:27.768 00:09:27.768 00:09:27.768 Commands Supported and Effects 00:09:27.768 ============================== 00:09:27.768 Admin Commands 00:09:27.768 -------------- 00:09:27.768 Get Log Page (02h): Supported 00:09:27.768 Identify (06h): Supported 00:09:27.768 Abort (08h): Supported 00:09:27.768 Set Features (09h): Supported 00:09:27.768 Get Features (0Ah): Supported 00:09:27.768 Asynchronous Event Request (0Ch): Supported 
00:09:27.768 Keep Alive (18h): Supported 00:09:27.768 I/O Commands 00:09:27.769 ------------ 00:09:27.769 Flush (00h): Supported LBA-Change 00:09:27.769 Write (01h): Supported LBA-Change 00:09:27.769 Read (02h): Supported 00:09:27.769 Compare (05h): Supported 00:09:27.769 Write Zeroes (08h): Supported LBA-Change 00:09:27.769 Dataset Management (09h): Supported LBA-Change 00:09:27.769 Copy (19h): Supported LBA-Change 00:09:27.769 00:09:27.769 Error Log 00:09:27.769 ========= 00:09:27.769 00:09:27.769 Arbitration 00:09:27.769 =========== 00:09:27.769 Arbitration Burst: 1 00:09:27.769 00:09:27.769 Power Management 00:09:27.769 ================ 00:09:27.769 Number of Power States: 1 00:09:27.769 Current Power State: Power State #0 00:09:27.769 Power State #0: 00:09:27.769 Max Power: 0.00 W 00:09:27.769 Non-Operational State: Operational 00:09:27.769 Entry Latency: Not Reported 00:09:27.769 Exit Latency: Not Reported 00:09:27.769 Relative Read Throughput: 0 00:09:27.769 Relative Read Latency: 0 00:09:27.769 Relative Write Throughput: 0 00:09:27.769 Relative Write Latency: 0 00:09:27.769 Idle Power: Not Reported 00:09:27.769 Active Power: Not Reported 00:09:27.769 Non-Operational Permissive Mode: Not Supported 00:09:27.769 00:09:27.769 Health Information 00:09:27.769 ================== 00:09:27.769 Critical Warnings: 00:09:27.769 Available Spare Space: OK 00:09:27.769 Temperature: OK 00:09:27.769 Device Reliability: OK 00:09:27.769 Read Only: No 00:09:27.769 Volatile Memory Backup: OK 00:09:27.769 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:27.769 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:27.769 Available Spare: 0% 00:09:27.769 Available Sp[2024-07-12 11:13:53.694315] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:27.769 [2024-07-12 11:13:53.694331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:09:27.769 [2024-07-12 11:13:53.694372] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:27.769 [2024-07-12 11:13:53.694389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:27.769 [2024-07-12 11:13:53.694400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:27.769 [2024-07-12 11:13:53.694411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:27.769 [2024-07-12 11:13:53.694420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:27.769 [2024-07-12 11:13:53.695876] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:27.769 [2024-07-12 11:13:53.695897] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:27.769 [2024-07-12 11:13:53.696465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:27.769 [2024-07-12 11:13:53.696541] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:27.769 [2024-07-12 11:13:53.696555] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:27.769 [2024-07-12 11:13:53.697480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:27.769 [2024-07-12 11:13:53.697503] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:09:27.769 [2024-07-12 11:13:53.697556] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:27.769 [2024-07-12 11:13:53.699518] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:27.769 are Threshold: 0% 00:09:27.769 Life Percentage Used: 0% 00:09:27.769 Data Units Read: 0 00:09:27.769 Data Units Written: 0 00:09:27.769 Host Read Commands: 0 00:09:27.769 Host Write Commands: 0 00:09:27.769 Controller Busy Time: 0 minutes 00:09:27.769 Power Cycles: 0 00:09:27.769 Power On Hours: 0 hours 00:09:27.769 Unsafe Shutdowns: 0 00:09:27.769 Unrecoverable Media Errors: 0 00:09:27.769 Lifetime Error Log Entries: 0 00:09:27.769 Warning Temperature Time: 0 minutes 00:09:27.769 Critical Temperature Time: 0 minutes 00:09:27.769 00:09:27.769 Number of Queues 00:09:27.769 ================ 00:09:27.769 Number of I/O Submission Queues: 127 00:09:27.769 Number of I/O Completion Queues: 127 00:09:27.769 00:09:27.769 Active Namespaces 00:09:27.769 ================= 00:09:27.769 Namespace ID:1 00:09:27.769 Error Recovery Timeout: Unlimited 00:09:27.769 Command Set Identifier: NVM (00h) 00:09:27.769 Deallocate: Supported 00:09:27.769 Deallocated/Unwritten Error: Not Supported 00:09:27.769 Deallocated Read Value: Unknown 00:09:27.769 Deallocate in Write Zeroes: Not Supported 00:09:27.769 Deallocated Guard Field: 0xFFFF 00:09:27.769 Flush: Supported 00:09:27.769 Reservation: Supported 00:09:27.769 Namespace Sharing Capabilities: Multiple Controllers 00:09:27.769 Size (in LBAs): 131072 (0GiB) 00:09:27.769 Capacity (in LBAs): 131072 (0GiB) 00:09:27.769 Utilization (in LBAs): 131072 (0GiB) 00:09:27.769 NGUID: D038460546ED4E4392AD874A09155914 00:09:27.769 UUID: d0384605-46ed-4e43-92ad-874a09155914 00:09:27.769 Thin Provisioning: Not Supported 00:09:27.769 Per-NS Atomic Units: Yes 00:09:27.769 Atomic Boundary Size (Normal): 0 
00:09:27.769 Atomic Boundary Size (PFail): 0 00:09:27.769 Atomic Boundary Offset: 0 00:09:27.769 Maximum Single Source Range Length: 65535 00:09:27.769 Maximum Copy Length: 65535 00:09:27.769 Maximum Source Range Count: 1 00:09:27.769 NGUID/EUI64 Never Reused: No 00:09:27.769 Namespace Write Protected: No 00:09:27.769 Number of LBA Formats: 1 00:09:27.769 Current LBA Format: LBA Format #00 00:09:27.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:27.769 00:09:27.769 11:13:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:27.769 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.026 [2024-07-12 11:13:53.929700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:33.285 Initializing NVMe Controllers 00:09:33.285 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:33.285 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:33.285 Initialization complete. Launching workers. 
00:09:33.285 ======================================================== 00:09:33.285 Latency(us) 00:09:33.285 Device Information : IOPS MiB/s Average min max 00:09:33.285 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34658.06 135.38 3693.18 1178.37 7334.84 00:09:33.285 ======================================================== 00:09:33.285 Total : 34658.06 135.38 3693.18 1178.37 7334.84 00:09:33.285 00:09:33.285 [2024-07-12 11:13:58.952785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:33.285 11:13:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:33.285 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.285 [2024-07-12 11:13:59.186954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:38.562 Initializing NVMe Controllers 00:09:38.562 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:38.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:38.562 Initialization complete. Launching workers. 
00:09:38.562 ======================================================== 00:09:38.562 Latency(us) 00:09:38.562 Device Information : IOPS MiB/s Average min max 00:09:38.562 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.09 62.65 7986.27 6997.99 11972.15 00:09:38.562 ======================================================== 00:09:38.562 Total : 16038.09 62.65 7986.27 6997.99 11972.15 00:09:38.562 00:09:38.562 [2024-07-12 11:14:04.226747] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:38.562 11:14:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:38.562 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.562 [2024-07-12 11:14:04.426705] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:43.820 [2024-07-12 11:14:09.491267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:43.820 Initializing NVMe Controllers 00:09:43.820 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:43.820 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:43.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:43.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:43.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:43.820 Initialization complete. Launching workers. 
00:09:43.820 Starting thread on core 2 00:09:43.820 Starting thread on core 3 00:09:43.820 Starting thread on core 1 00:09:43.820 11:14:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:43.820 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.820 [2024-07-12 11:14:09.789362] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:47.104 [2024-07-12 11:14:12.853682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:47.104 Initializing NVMe Controllers 00:09:47.104 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:47.104 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:47.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:47.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:47.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:47.104 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:47.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:47.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:47.104 Initialization complete. Launching workers. 
00:09:47.104 Starting thread on core 1 with urgent priority queue 00:09:47.104 Starting thread on core 2 with urgent priority queue 00:09:47.104 Starting thread on core 3 with urgent priority queue 00:09:47.104 Starting thread on core 0 with urgent priority queue 00:09:47.104 SPDK bdev Controller (SPDK1 ) core 0: 5054.00 IO/s 19.79 secs/100000 ios 00:09:47.104 SPDK bdev Controller (SPDK1 ) core 1: 5549.00 IO/s 18.02 secs/100000 ios 00:09:47.104 SPDK bdev Controller (SPDK1 ) core 2: 5729.33 IO/s 17.45 secs/100000 ios 00:09:47.104 SPDK bdev Controller (SPDK1 ) core 3: 5518.33 IO/s 18.12 secs/100000 ios 00:09:47.104 ======================================================== 00:09:47.104 00:09:47.104 11:14:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:47.104 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.104 [2024-07-12 11:14:13.160496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:47.104 Initializing NVMe Controllers 00:09:47.104 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:47.104 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:47.104 Namespace ID: 1 size: 0GB 00:09:47.104 Initialization complete. 00:09:47.104 INFO: using host memory buffer for IO 00:09:47.104 Hello world! 
00:09:47.104 [2024-07-12 11:14:13.202161] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:47.361 11:14:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:47.361 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.618 [2024-07-12 11:14:13.502380] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:48.551 Initializing NVMe Controllers 00:09:48.551 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:48.551 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:48.551 Initialization complete. Launching workers. 00:09:48.551 submit (in ns) avg, min, max = 6357.5, 3512.2, 4999777.8 00:09:48.551 complete (in ns) avg, min, max = 28098.9, 2068.9, 6011713.3 00:09:48.551 00:09:48.551 Submit histogram 00:09:48.551 ================ 00:09:48.551 Range in us Cumulative Count 00:09:48.551 3.508 - 3.532: 0.1768% ( 24) 00:09:48.551 3.532 - 3.556: 0.7219% ( 74) 00:09:48.551 3.556 - 3.579: 2.7403% ( 274) 00:09:48.551 3.579 - 3.603: 6.5783% ( 521) 00:09:48.551 3.603 - 3.627: 13.8269% ( 984) 00:09:48.551 3.627 - 3.650: 22.6225% ( 1194) 00:09:48.551 3.650 - 3.674: 32.3536% ( 1321) 00:09:48.551 3.674 - 3.698: 40.3389% ( 1084) 00:09:48.551 3.698 - 3.721: 47.1013% ( 918) 00:09:48.551 3.721 - 3.745: 51.7937% ( 637) 00:09:48.551 3.745 - 3.769: 56.3683% ( 621) 00:09:48.551 3.769 - 3.793: 60.5967% ( 574) 00:09:48.551 3.793 - 3.816: 64.2357% ( 494) 00:09:48.551 3.816 - 3.840: 67.8379% ( 489) 00:09:48.551 3.840 - 3.864: 71.9632% ( 560) 00:09:48.551 3.864 - 3.887: 76.3904% ( 601) 00:09:48.551 3.887 - 3.911: 80.1915% ( 516) 00:09:48.551 3.911 - 3.935: 83.5948% ( 462) 00:09:48.551 3.935 - 3.959: 85.7606% ( 294) 00:09:48.551 3.959 - 3.982: 87.6906% ( 
262) 00:09:48.551 3.982 - 4.006: 89.3481% ( 225) 00:09:48.551 4.006 - 4.030: 90.7698% ( 193) 00:09:48.551 4.030 - 4.053: 91.7495% ( 133) 00:09:48.551 4.053 - 4.077: 92.6188% ( 118) 00:09:48.551 4.077 - 4.101: 93.5985% ( 133) 00:09:48.551 4.101 - 4.124: 94.3720% ( 105) 00:09:48.551 4.124 - 4.148: 94.9613% ( 80) 00:09:48.551 4.148 - 4.172: 95.3223% ( 49) 00:09:48.551 4.172 - 4.196: 95.8011% ( 65) 00:09:48.551 4.196 - 4.219: 96.0589% ( 35) 00:09:48.551 4.219 - 4.243: 96.2726% ( 29) 00:09:48.551 4.243 - 4.267: 96.4052% ( 18) 00:09:48.551 4.267 - 4.290: 96.5304% ( 17) 00:09:48.551 4.290 - 4.314: 96.6188% ( 12) 00:09:48.551 4.314 - 4.338: 96.7072% ( 12) 00:09:48.551 4.338 - 4.361: 96.8177% ( 15) 00:09:48.551 4.361 - 4.385: 96.8913% ( 10) 00:09:48.551 4.385 - 4.409: 96.9282% ( 5) 00:09:48.551 4.409 - 4.433: 96.9429% ( 2) 00:09:48.551 4.433 - 4.456: 96.9724% ( 4) 00:09:48.551 4.456 - 4.480: 97.0092% ( 5) 00:09:48.551 4.480 - 4.504: 97.0681% ( 8) 00:09:48.551 4.504 - 4.527: 97.1050% ( 5) 00:09:48.551 4.527 - 4.551: 97.1271% ( 3) 00:09:48.551 4.551 - 4.575: 97.1418% ( 2) 00:09:48.551 4.575 - 4.599: 97.1492% ( 1) 00:09:48.551 4.599 - 4.622: 97.1786% ( 4) 00:09:48.551 4.622 - 4.646: 97.2155% ( 5) 00:09:48.551 4.646 - 4.670: 97.2523% ( 5) 00:09:48.551 4.670 - 4.693: 97.2818% ( 4) 00:09:48.551 4.693 - 4.717: 97.3554% ( 10) 00:09:48.551 4.717 - 4.741: 97.3923% ( 5) 00:09:48.551 4.741 - 4.764: 97.4438% ( 7) 00:09:48.551 4.764 - 4.788: 97.4954% ( 7) 00:09:48.551 4.788 - 4.812: 97.5470% ( 7) 00:09:48.551 4.812 - 4.836: 97.6133% ( 9) 00:09:48.551 4.836 - 4.859: 97.6648% ( 7) 00:09:48.551 4.859 - 4.883: 97.7017% ( 5) 00:09:48.551 4.883 - 4.907: 97.7459% ( 6) 00:09:48.551 4.907 - 4.930: 97.8048% ( 8) 00:09:48.551 4.930 - 4.954: 97.8195% ( 2) 00:09:48.551 4.954 - 4.978: 97.8711% ( 7) 00:09:48.551 4.978 - 5.001: 97.8932% ( 3) 00:09:48.551 5.001 - 5.025: 97.9448% ( 7) 00:09:48.551 5.025 - 5.049: 97.9816% ( 5) 00:09:48.551 5.049 - 5.073: 97.9963% ( 2) 00:09:48.551 5.073 - 5.096: 98.0184% ( 
3) 00:09:48.551 5.096 - 5.120: 98.0479% ( 4) 00:09:48.551 5.120 - 5.144: 98.0773% ( 4) 00:09:48.551 5.144 - 5.167: 98.0994% ( 3) 00:09:48.551 5.167 - 5.191: 98.1142% ( 2) 00:09:48.551 5.191 - 5.215: 98.1289% ( 2) 00:09:48.551 5.215 - 5.239: 98.1436% ( 2) 00:09:48.551 5.239 - 5.262: 98.1510% ( 1) 00:09:48.551 5.262 - 5.286: 98.1584% ( 1) 00:09:48.551 5.286 - 5.310: 98.1657% ( 1) 00:09:48.551 5.310 - 5.333: 98.1731% ( 1) 00:09:48.551 5.333 - 5.357: 98.1805% ( 1) 00:09:48.551 5.357 - 5.381: 98.2099% ( 4) 00:09:48.551 5.428 - 5.452: 98.2320% ( 3) 00:09:48.551 5.452 - 5.476: 98.2394% ( 1) 00:09:48.551 5.476 - 5.499: 98.2468% ( 1) 00:09:48.551 5.499 - 5.523: 98.2541% ( 1) 00:09:48.551 5.523 - 5.547: 98.2615% ( 1) 00:09:48.551 5.570 - 5.594: 98.2762% ( 2) 00:09:48.551 5.594 - 5.618: 98.2836% ( 1) 00:09:48.551 5.618 - 5.641: 98.2910% ( 1) 00:09:48.551 5.665 - 5.689: 98.3057% ( 2) 00:09:48.551 5.736 - 5.760: 98.3131% ( 1) 00:09:48.551 5.760 - 5.784: 98.3204% ( 1) 00:09:48.551 5.784 - 5.807: 98.3278% ( 1) 00:09:48.551 5.831 - 5.855: 98.3352% ( 1) 00:09:48.551 5.855 - 5.879: 98.3425% ( 1) 00:09:48.551 5.879 - 5.902: 98.3499% ( 1) 00:09:48.551 5.926 - 5.950: 98.3573% ( 1) 00:09:48.551 5.950 - 5.973: 98.3720% ( 2) 00:09:48.551 6.021 - 6.044: 98.3867% ( 2) 00:09:48.551 6.116 - 6.163: 98.3941% ( 1) 00:09:48.551 6.163 - 6.210: 98.4015% ( 1) 00:09:48.551 6.305 - 6.353: 98.4088% ( 1) 00:09:48.551 6.447 - 6.495: 98.4162% ( 1) 00:09:48.551 6.542 - 6.590: 98.4309% ( 2) 00:09:48.551 6.590 - 6.637: 98.4383% ( 1) 00:09:48.551 6.779 - 6.827: 98.4530% ( 2) 00:09:48.551 7.016 - 7.064: 98.4604% ( 1) 00:09:48.551 7.301 - 7.348: 98.4678% ( 1) 00:09:48.551 7.348 - 7.396: 98.4899% ( 3) 00:09:48.551 7.396 - 7.443: 98.4972% ( 1) 00:09:48.551 7.538 - 7.585: 98.5046% ( 1) 00:09:48.551 7.633 - 7.680: 98.5267% ( 3) 00:09:48.551 7.680 - 7.727: 98.5414% ( 2) 00:09:48.551 7.727 - 7.775: 98.5488% ( 1) 00:09:48.551 7.775 - 7.822: 98.5709% ( 3) 00:09:48.551 7.822 - 7.870: 98.5783% ( 1) 00:09:48.551 7.917 - 
7.964: 98.5930% ( 2) 00:09:48.551 7.964 - 8.012: 98.6004% ( 1) 00:09:48.551 8.012 - 8.059: 98.6077% ( 1) 00:09:48.551 8.059 - 8.107: 98.6372% ( 4) 00:09:48.551 8.201 - 8.249: 98.6446% ( 1) 00:09:48.552 8.249 - 8.296: 98.6667% ( 3) 00:09:48.552 8.296 - 8.344: 98.6814% ( 2) 00:09:48.552 8.391 - 8.439: 98.6888% ( 1) 00:09:48.552 8.439 - 8.486: 98.7035% ( 2) 00:09:48.552 8.581 - 8.628: 98.7109% ( 1) 00:09:48.552 8.676 - 8.723: 98.7182% ( 1) 00:09:48.552 8.818 - 8.865: 98.7256% ( 1) 00:09:48.552 8.865 - 8.913: 98.7330% ( 1) 00:09:48.552 9.150 - 9.197: 98.7403% ( 1) 00:09:48.552 9.244 - 9.292: 98.7477% ( 1) 00:09:48.552 9.387 - 9.434: 98.7551% ( 1) 00:09:48.552 9.576 - 9.624: 98.7624% ( 1) 00:09:48.552 9.671 - 9.719: 98.7698% ( 1) 00:09:48.552 9.719 - 9.766: 98.7772% ( 1) 00:09:48.552 9.956 - 10.003: 98.7845% ( 1) 00:09:48.552 10.145 - 10.193: 98.7919% ( 1) 00:09:48.552 10.240 - 10.287: 98.7993% ( 1) 00:09:48.552 10.335 - 10.382: 98.8140% ( 2) 00:09:48.552 10.572 - 10.619: 98.8214% ( 1) 00:09:48.552 10.667 - 10.714: 98.8287% ( 1) 00:09:48.552 11.046 - 11.093: 98.8361% ( 1) 00:09:48.552 11.473 - 11.520: 98.8435% ( 1) 00:09:48.552 11.804 - 11.852: 98.8508% ( 1) 00:09:48.552 12.136 - 12.231: 98.8582% ( 1) 00:09:48.552 12.800 - 12.895: 98.8656% ( 1) 00:09:48.552 13.084 - 13.179: 98.8729% ( 1) 00:09:48.552 13.464 - 13.559: 98.8803% ( 1) 00:09:48.552 13.748 - 13.843: 98.8950% ( 2) 00:09:48.552 14.507 - 14.601: 98.9024% ( 1) 00:09:48.552 15.265 - 15.360: 98.9098% ( 1) 00:09:48.552 15.739 - 15.834: 98.9171% ( 1) 00:09:48.552 16.687 - 16.782: 98.9245% ( 1) 00:09:48.552 17.067 - 17.161: 98.9319% ( 1) 00:09:48.552 17.256 - 17.351: 98.9392% ( 1) 00:09:48.552 17.351 - 17.446: 98.9613% ( 3) 00:09:48.552 17.446 - 17.541: 98.9761% ( 2) 00:09:48.552 17.541 - 17.636: 99.0129% ( 5) 00:09:48.552 17.636 - 17.730: 99.0497% ( 5) 00:09:48.552 17.730 - 17.825: 99.0939% ( 6) 00:09:48.552 17.825 - 17.920: 99.1381% ( 6) 00:09:48.552 17.920 - 18.015: 99.1750% ( 5) 00:09:48.552 18.015 - 18.110: 
99.2192% ( 6) 00:09:48.552 18.110 - 18.204: 99.3149% ( 13) 00:09:48.552 18.204 - 18.299: 99.3738% ( 8) 00:09:48.552 18.299 - 18.394: 99.4180% ( 6) 00:09:48.552 18.394 - 18.489: 99.4843% ( 9) 00:09:48.552 18.489 - 18.584: 99.5359% ( 7) 00:09:48.552 18.584 - 18.679: 99.6243% ( 12) 00:09:48.552 18.679 - 18.773: 99.6832% ( 8) 00:09:48.552 18.773 - 18.868: 99.7053% ( 3) 00:09:48.552 18.868 - 18.963: 99.7495% ( 6) 00:09:48.552 18.963 - 19.058: 99.8011% ( 7) 00:09:48.552 19.058 - 19.153: 99.8232% ( 3) 00:09:48.552 19.153 - 19.247: 99.8453% ( 3) 00:09:48.552 19.247 - 19.342: 99.8600% ( 2) 00:09:48.552 19.342 - 19.437: 99.8674% ( 1) 00:09:48.552 19.627 - 19.721: 99.8821% ( 2) 00:09:48.552 19.721 - 19.816: 99.8969% ( 2) 00:09:48.552 20.290 - 20.385: 99.9042% ( 1) 00:09:48.552 20.764 - 20.859: 99.9116% ( 1) 00:09:48.552 23.893 - 23.988: 99.9190% ( 1) 00:09:48.552 24.652 - 24.841: 99.9263% ( 1) 00:09:48.552 28.634 - 28.824: 99.9337% ( 1) 00:09:48.552 29.582 - 29.772: 99.9411% ( 1) 00:09:48.552 3422.436 - 3446.708: 99.9484% ( 1) 00:09:48.552 3980.705 - 4004.978: 99.9779% ( 4) 00:09:48.552 4004.978 - 4029.250: 99.9926% ( 2) 00:09:48.552 4975.881 - 5000.154: 100.0000% ( 1) 00:09:48.552 00:09:48.552 Complete histogram 00:09:48.552 ================== 00:09:48.552 Range in us Cumulative Count 00:09:48.552 2.062 - 2.074: 0.9650% ( 131) 00:09:48.552 2.074 - 2.086: 30.5046% ( 4010) 00:09:48.552 2.086 - 2.098: 43.3223% ( 1740) 00:09:48.552 2.098 - 2.110: 46.5414% ( 437) 00:09:48.552 2.110 - 2.121: 54.9613% ( 1143) 00:09:48.552 2.121 - 2.133: 57.8269% ( 389) 00:09:48.552 2.133 - 2.145: 61.8932% ( 552) 00:09:48.552 2.145 - 2.157: 72.7882% ( 1479) 00:09:48.552 2.157 - 2.169: 75.1087% ( 315) 00:09:48.552 2.169 - 2.181: 77.1123% ( 272) 00:09:48.552 2.181 - 2.193: 80.4788% ( 457) 00:09:48.552 2.193 - 2.204: 81.4365% ( 130) 00:09:48.552 2.204 - 2.216: 82.6593% ( 166) 00:09:48.552 2.216 - 2.228: 87.0350% ( 594) 00:09:48.552 2.228 - 2.240: 88.9208% ( 256) 00:09:48.552 2.240 - 2.252: 90.9245% ( 
272) 00:09:48.552 2.252 - 2.264: 92.8103% ( 256) 00:09:48.552 2.264 - 2.276: 93.3775% ( 77) 00:09:48.552 2.276 - 2.287: 93.7459% ( 50) 00:09:48.552 2.287 - 2.299: 94.0405% ( 40) 00:09:48.552 2.299 - 2.311: 94.4383% ( 54) 00:09:48.552 2.311 - 2.323: 95.1455% ( 96) 00:09:48.552 2.323 - 2.335: 95.3002% ( 21) 00:09:48.552 2.335 - 2.347: 95.3812% ( 11) 00:09:48.552 2.347 - 2.359: 95.4770% ( 13) 00:09:48.552 2.359 - 2.370: 95.6611% ( 25) 00:09:48.552 2.370 - 2.382: 95.8306% ( 23) 00:09:48.552 2.382 - 2.394: 96.1989% ( 50) 00:09:48.552 2.394 - 2.406: 96.4567% ( 35) 00:09:48.552 2.406 - 2.418: 96.6924% ( 32) 00:09:48.552 2.418 - 2.430: 96.8840% ( 26) 00:09:48.552 2.430 - 2.441: 97.0902% ( 28) 00:09:48.552 2.441 - 2.453: 97.2597% ( 23) 00:09:48.552 2.453 - 2.465: 97.3923% ( 18) 00:09:48.552 2.465 - 2.477: 97.4807% ( 12) 00:09:48.552 2.477 - 2.489: 97.5470% ( 9) 00:09:48.552 2.489 - 2.501: 97.6943% ( 20) 00:09:48.552 2.501 - 2.513: 97.7532% ( 8) 00:09:48.552 2.513 - 2.524: 97.8122% ( 8) 00:09:48.552 2.524 - 2.536: 97.8564% ( 6) 00:09:48.552 2.536 - 2.548: 97.8858% ( 4) 00:09:48.552 2.548 - 2.560: 97.9227% ( 5) 00:09:48.552 2.560 - 2.572: 97.9521% ( 4) 00:09:48.552 2.572 - 2.584: 97.9963% ( 6) 00:09:48.552 2.596 - 2.607: 98.0110% ( 2) 00:09:48.552 2.607 - 2.619: 98.0184% ( 1) 00:09:48.552 2.619 - 2.631: 98.0258% ( 1) 00:09:48.552 2.631 - 2.643: 98.0405% ( 2) 00:09:48.552 2.643 - 2.655: 98.0626% ( 3) 00:09:48.552 2.655 - 2.667: 98.0773% ( 2) 00:09:48.552 2.667 - 2.679: 98.0847% ( 1) 00:09:48.552 2.690 - 2.702: 98.1068% ( 3) 00:09:48.552 2.702 - 2.714: 98.1215% ( 2) 00:09:48.552 2.714 - 2.726: 98.1363% ( 2) 00:09:48.552 2.738 - 2.750: 98.1584% ( 3) 00:09:48.552 2.773 - 2.785: 98.1657% ( 1) 00:09:48.552 2.785 - 2.797: 98.1805% ( 2) 00:09:48.552 2.809 - 2.821: 98.1878% ( 1) 00:09:48.552 2.821 - 2.833: 98.2099% ( 3) 00:09:48.552 2.844 - 2.856: 98.2247% ( 2) 00:09:48.552 2.856 - 2.868: 98.2468% ( 3) 00:09:48.552 2.868 - 2.880: 98.2689% ( 3) 00:09:48.552 2.880 - 2.892: 98.2836% ( 2) 
00:09:48.552 2.892 - 2.904: 98.2983% ( 2) 00:09:48.552 2.904 - 2.916: 9[2024-07-12 11:14:14.523602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:48.552 8.3057% ( 1) 00:09:48.552 2.951 - 2.963: 98.3131% ( 1) 00:09:48.552 2.963 - 2.975: 98.3278% ( 2) 00:09:48.552 2.975 - 2.987: 98.3573% ( 4) 00:09:48.552 2.987 - 2.999: 98.3646% ( 1) 00:09:48.552 2.999 - 3.010: 98.3720% ( 1) 00:09:48.552 3.010 - 3.022: 98.3867% ( 2) 00:09:48.552 3.058 - 3.081: 98.3941% ( 1) 00:09:48.552 3.081 - 3.105: 98.4015% ( 1) 00:09:48.552 3.129 - 3.153: 98.4088% ( 1) 00:09:48.552 3.153 - 3.176: 98.4162% ( 1) 00:09:48.552 3.176 - 3.200: 98.4309% ( 2) 00:09:48.552 3.200 - 3.224: 98.4457% ( 2) 00:09:48.552 3.247 - 3.271: 98.4678% ( 3) 00:09:48.552 3.319 - 3.342: 98.4825% ( 2) 00:09:48.552 3.390 - 3.413: 98.4899% ( 1) 00:09:48.552 3.413 - 3.437: 98.5046% ( 2) 00:09:48.552 3.437 - 3.461: 98.5193% ( 2) 00:09:48.552 3.461 - 3.484: 98.5562% ( 5) 00:09:48.552 3.484 - 3.508: 98.5635% ( 1) 00:09:48.552 3.508 - 3.532: 98.5709% ( 1) 00:09:48.552 3.579 - 3.603: 98.5783% ( 1) 00:09:48.552 3.603 - 3.627: 98.5856% ( 1) 00:09:48.552 3.627 - 3.650: 98.6077% ( 3) 00:09:48.552 3.674 - 3.698: 98.6151% ( 1) 00:09:48.552 3.698 - 3.721: 98.6225% ( 1) 00:09:48.552 3.721 - 3.745: 98.6298% ( 1) 00:09:48.552 3.769 - 3.793: 98.6372% ( 1) 00:09:48.552 3.793 - 3.816: 98.6446% ( 1) 00:09:48.552 3.887 - 3.911: 98.6593% ( 2) 00:09:48.552 3.911 - 3.935: 98.6740% ( 2) 00:09:48.552 3.982 - 4.006: 98.6814% ( 1) 00:09:48.552 4.030 - 4.053: 98.6888% ( 1) 00:09:48.552 4.077 - 4.101: 98.6961% ( 1) 00:09:48.552 5.120 - 5.144: 98.7035% ( 1) 00:09:48.552 5.310 - 5.333: 98.7109% ( 1) 00:09:48.552 5.428 - 5.452: 98.7182% ( 1) 00:09:48.552 5.641 - 5.665: 98.7330% ( 2) 00:09:48.552 5.736 - 5.760: 98.7403% ( 1) 00:09:48.552 5.784 - 5.807: 98.7477% ( 1) 00:09:48.552 5.807 - 5.831: 98.7551% ( 1) 00:09:48.552 5.902 - 5.926: 98.7698% ( 2) 00:09:48.552 6.116 - 6.163: 98.7772% ( 1) 
00:09:48.552 6.163 - 6.210: 98.7919% ( 2) 00:09:48.552 6.210 - 6.258: 98.7993% ( 1) 00:09:48.552 6.258 - 6.305: 98.8066% ( 1) 00:09:48.552 6.305 - 6.353: 98.8140% ( 1) 00:09:48.552 6.637 - 6.684: 98.8214% ( 1) 00:09:48.552 7.159 - 7.206: 98.8287% ( 1) 00:09:48.552 8.012 - 8.059: 98.8361% ( 1) 00:09:48.552 8.533 - 8.581: 98.8435% ( 1) 00:09:48.552 15.455 - 15.550: 98.8508% ( 1) 00:09:48.552 15.644 - 15.739: 98.8582% ( 1) 00:09:48.552 15.739 - 15.834: 98.8656% ( 1) 00:09:48.552 15.834 - 15.929: 98.8803% ( 2) 00:09:48.552 15.929 - 16.024: 98.9098% ( 4) 00:09:48.552 16.024 - 16.119: 98.9171% ( 1) 00:09:48.552 16.119 - 16.213: 98.9392% ( 3) 00:09:48.552 16.213 - 16.308: 98.9908% ( 7) 00:09:48.552 16.308 - 16.403: 99.0055% ( 2) 00:09:48.552 16.403 - 16.498: 99.0497% ( 6) 00:09:48.552 16.498 - 16.593: 99.0718% ( 3) 00:09:48.552 16.593 - 16.687: 99.1455% ( 10) 00:09:48.553 16.687 - 16.782: 99.1750% ( 4) 00:09:48.553 16.782 - 16.877: 99.1971% ( 3) 00:09:48.553 16.877 - 16.972: 99.2044% ( 1) 00:09:48.553 16.972 - 17.067: 99.2413% ( 5) 00:09:48.553 17.067 - 17.161: 99.2486% ( 1) 00:09:48.553 17.161 - 17.256: 99.2634% ( 2) 00:09:48.553 17.256 - 17.351: 99.2928% ( 4) 00:09:48.553 17.446 - 17.541: 99.3002% ( 1) 00:09:48.553 17.541 - 17.636: 99.3076% ( 1) 00:09:48.553 18.015 - 18.110: 99.3149% ( 1) 00:09:48.553 18.204 - 18.299: 99.3223% ( 1) 00:09:48.553 18.679 - 18.773: 99.3297% ( 1) 00:09:48.553 18.773 - 18.868: 99.3370% ( 1) 00:09:48.553 19.721 - 19.816: 99.3444% ( 1) 00:09:48.553 26.359 - 26.548: 99.3517% ( 1) 00:09:48.553 2026.761 - 2038.898: 99.3591% ( 1) 00:09:48.553 3203.982 - 3228.255: 99.3665% ( 1) 00:09:48.553 3980.705 - 4004.978: 99.9116% ( 74) 00:09:48.553 4004.978 - 4029.250: 99.9853% ( 10) 00:09:48.553 4029.250 - 4053.523: 99.9926% ( 1) 00:09:48.553 5995.330 - 6019.603: 100.0000% ( 1) 00:09:48.553 00:09:48.553 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 
00:09:48.553 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:48.553 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:48.553 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:48.553 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:48.810 [ 00:09:48.810 { 00:09:48.810 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:48.810 "subtype": "Discovery", 00:09:48.810 "listen_addresses": [], 00:09:48.810 "allow_any_host": true, 00:09:48.810 "hosts": [] 00:09:48.810 }, 00:09:48.810 { 00:09:48.810 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:48.810 "subtype": "NVMe", 00:09:48.810 "listen_addresses": [ 00:09:48.810 { 00:09:48.810 "trtype": "VFIOUSER", 00:09:48.810 "adrfam": "IPv4", 00:09:48.810 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:48.810 "trsvcid": "0" 00:09:48.810 } 00:09:48.810 ], 00:09:48.810 "allow_any_host": true, 00:09:48.810 "hosts": [], 00:09:48.810 "serial_number": "SPDK1", 00:09:48.810 "model_number": "SPDK bdev Controller", 00:09:48.810 "max_namespaces": 32, 00:09:48.810 "min_cntlid": 1, 00:09:48.810 "max_cntlid": 65519, 00:09:48.810 "namespaces": [ 00:09:48.810 { 00:09:48.810 "nsid": 1, 00:09:48.810 "bdev_name": "Malloc1", 00:09:48.810 "name": "Malloc1", 00:09:48.810 "nguid": "D038460546ED4E4392AD874A09155914", 00:09:48.810 "uuid": "d0384605-46ed-4e43-92ad-874a09155914" 00:09:48.810 } 00:09:48.810 ] 00:09:48.810 }, 00:09:48.810 { 00:09:48.810 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:48.810 "subtype": "NVMe", 00:09:48.810 "listen_addresses": [ 00:09:48.810 { 00:09:48.810 "trtype": "VFIOUSER", 00:09:48.810 "adrfam": "IPv4", 00:09:48.810 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:48.810 "trsvcid": "0" 00:09:48.810 } 00:09:48.810 
], 00:09:48.810 "allow_any_host": true, 00:09:48.810 "hosts": [], 00:09:48.810 "serial_number": "SPDK2", 00:09:48.810 "model_number": "SPDK bdev Controller", 00:09:48.810 "max_namespaces": 32, 00:09:48.811 "min_cntlid": 1, 00:09:48.811 "max_cntlid": 65519, 00:09:48.811 "namespaces": [ 00:09:48.811 { 00:09:48.811 "nsid": 1, 00:09:48.811 "bdev_name": "Malloc2", 00:09:48.811 "name": "Malloc2", 00:09:48.811 "nguid": "2149FD1DCD2643298ED6E73DFF20A51F", 00:09:48.811 "uuid": "2149fd1d-cd26-4329-8ed6-e73dff20a51f" 00:09:48.811 } 00:09:48.811 ] 00:09:48.811 } 00:09:48.811 ] 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=519700 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:48.811 11:14:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:48.811 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.069 [2024-07-12 11:14:15.037339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:49.069 Malloc3 00:09:49.069 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:49.325 [2024-07-12 11:14:15.421125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:49.325 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:49.582 Asynchronous Event Request test 00:09:49.582 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:49.582 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:49.582 Registering asynchronous event callbacks... 00:09:49.582 Starting namespace attribute notice tests for all controllers... 00:09:49.582 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:49.582 aer_cb - Changed Namespace 00:09:49.582 Cleaning up... 
00:09:49.582 [ 00:09:49.582 { 00:09:49.582 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:49.582 "subtype": "Discovery", 00:09:49.582 "listen_addresses": [], 00:09:49.582 "allow_any_host": true, 00:09:49.582 "hosts": [] 00:09:49.582 }, 00:09:49.582 { 00:09:49.582 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:49.582 "subtype": "NVMe", 00:09:49.582 "listen_addresses": [ 00:09:49.582 { 00:09:49.582 "trtype": "VFIOUSER", 00:09:49.582 "adrfam": "IPv4", 00:09:49.582 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:49.582 "trsvcid": "0" 00:09:49.582 } 00:09:49.582 ], 00:09:49.582 "allow_any_host": true, 00:09:49.582 "hosts": [], 00:09:49.582 "serial_number": "SPDK1", 00:09:49.582 "model_number": "SPDK bdev Controller", 00:09:49.582 "max_namespaces": 32, 00:09:49.582 "min_cntlid": 1, 00:09:49.582 "max_cntlid": 65519, 00:09:49.582 "namespaces": [ 00:09:49.582 { 00:09:49.582 "nsid": 1, 00:09:49.582 "bdev_name": "Malloc1", 00:09:49.582 "name": "Malloc1", 00:09:49.582 "nguid": "D038460546ED4E4392AD874A09155914", 00:09:49.582 "uuid": "d0384605-46ed-4e43-92ad-874a09155914" 00:09:49.582 }, 00:09:49.582 { 00:09:49.582 "nsid": 2, 00:09:49.582 "bdev_name": "Malloc3", 00:09:49.582 "name": "Malloc3", 00:09:49.582 "nguid": "D2F0230FAB8D4B77A5CC2DAC4E87DFE9", 00:09:49.582 "uuid": "d2f0230f-ab8d-4b77-a5cc-2dac4e87dfe9" 00:09:49.582 } 00:09:49.582 ] 00:09:49.582 }, 00:09:49.582 { 00:09:49.582 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:49.582 "subtype": "NVMe", 00:09:49.582 "listen_addresses": [ 00:09:49.582 { 00:09:49.582 "trtype": "VFIOUSER", 00:09:49.582 "adrfam": "IPv4", 00:09:49.582 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:49.582 "trsvcid": "0" 00:09:49.582 } 00:09:49.582 ], 00:09:49.582 "allow_any_host": true, 00:09:49.582 "hosts": [], 00:09:49.582 "serial_number": "SPDK2", 00:09:49.582 "model_number": "SPDK bdev Controller", 00:09:49.582 "max_namespaces": 32, 00:09:49.582 "min_cntlid": 1, 00:09:49.582 "max_cntlid": 65519, 00:09:49.582 "namespaces": [ 
00:09:49.582 { 00:09:49.582 "nsid": 1, 00:09:49.582 "bdev_name": "Malloc2", 00:09:49.582 "name": "Malloc2", 00:09:49.582 "nguid": "2149FD1DCD2643298ED6E73DFF20A51F", 00:09:49.582 "uuid": "2149fd1d-cd26-4329-8ed6-e73dff20a51f" 00:09:49.582 } 00:09:49.582 ] 00:09:49.582 } 00:09:49.582 ] 00:09:49.582 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 519700 00:09:49.582 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:49.582 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:49.582 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:49.582 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:49.582 [2024-07-12 11:14:15.703914] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:09:49.582 [2024-07-12 11:14:15.703957] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519839 ] 00:09:49.841 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.841 [2024-07-12 11:14:15.739010] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:49.841 [2024-07-12 11:14:15.747221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:49.841 [2024-07-12 11:14:15.747252] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd948311000 00:09:49.841 [2024-07-12 11:14:15.748237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.749224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.750240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.751236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.752242] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.753248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.754262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.755269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:49.841 [2024-07-12 11:14:15.756281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:49.841 [2024-07-12 11:14:15.756303] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd948306000 00:09:49.841 [2024-07-12 11:14:15.757417] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:49.841 [2024-07-12 11:14:15.769503] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:49.841 [2024-07-12 11:14:15.769536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:49.841 [2024-07-12 11:14:15.778669] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:49.841 [2024-07-12 11:14:15.778723] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:49.841 [2024-07-12 11:14:15.778810] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:49.841 [2024-07-12 11:14:15.778835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:49.841 [2024-07-12 11:14:15.778845] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:49.841 [2024-07-12 11:14:15.779673] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:49.841 [2024-07-12 11:14:15.779694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:49.841 [2024-07-12 11:14:15.779707] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:49.841 [2024-07-12 11:14:15.780675] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:49.841 [2024-07-12 11:14:15.780695] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:49.841 [2024-07-12 11:14:15.780709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:49.841 [2024-07-12 11:14:15.781685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:49.841 [2024-07-12 11:14:15.781707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:49.841 [2024-07-12 11:14:15.782689] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:49.841 [2024-07-12 11:14:15.782709] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:49.841 [2024-07-12 11:14:15.782718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:49.841 [2024-07-12 11:14:15.782734] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:49.841 [2024-07-12 11:14:15.782843] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:49.841 [2024-07-12 11:14:15.782873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:49.841 [2024-07-12 11:14:15.782882] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:49.841 [2024-07-12 11:14:15.783697] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:49.842 [2024-07-12 11:14:15.784699] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:49.842 [2024-07-12 11:14:15.785710] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:49.842 [2024-07-12 11:14:15.786710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:49.842 [2024-07-12 11:14:15.786780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:49.842 [2024-07-12 11:14:15.787722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:49.842 [2024-07-12 11:14:15.787742] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:49.842 [2024-07-12 11:14:15.787751] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.787774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:49.842 [2024-07-12 11:14:15.787787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.787808] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:49.842 [2024-07-12 11:14:15.787817] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:49.842 [2024-07-12 11:14:15.787835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.791884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.791907] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:49.842 [2024-07-12 11:14:15.791921] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:49.842 [2024-07-12 11:14:15.791930] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:49.842 [2024-07-12 11:14:15.791938] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:49.842 [2024-07-12 11:14:15.791946] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:49.842 [2024-07-12 
11:14:15.791954] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:49.842 [2024-07-12 11:14:15.791962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.791979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.791996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.799876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.799904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.842 [2024-07-12 11:14:15.799919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.842 [2024-07-12 11:14:15.799931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.842 [2024-07-12 11:14:15.799943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.842 [2024-07-12 11:14:15.799952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.799967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:49.842 [2024-07-12 
11:14:15.799982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.807877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.807895] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:49.842 [2024-07-12 11:14:15.807904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.807915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.807925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.807939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.815875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.815946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.815963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.815977] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:49.842 [2024-07-12 
11:14:15.815986] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:49.842 [2024-07-12 11:14:15.815996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.823891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.823915] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:49.842 [2024-07-12 11:14:15.823935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.823954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.823968] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:49.842 [2024-07-12 11:14:15.823976] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:49.842 [2024-07-12 11:14:15.823986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.831877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.831905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.831921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.831935] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:49.842 [2024-07-12 11:14:15.831944] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:49.842 [2024-07-12 11:14:15.831953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:49.842 [2024-07-12 11:14:15.839874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:49.842 [2024-07-12 11:14:15.839896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:49.842 [2024-07-12 11:14:15.839909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:49.843 [2024-07-12 11:14:15.839923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:49.843 [2024-07-12 11:14:15.839934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:09:49.843 [2024-07-12 11:14:15.839942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:49.843 [2024-07-12 11:14:15.839951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:49.843 [2024-07-12 11:14:15.839959] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:09:49.843 [2024-07-12 11:14:15.839966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:49.843 [2024-07-12 11:14:15.839974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:49.843 [2024-07-12 11:14:15.840000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.847879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.847905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.855876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.855911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.863874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.863901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.871877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.871909] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:49.843 [2024-07-12 11:14:15.871920] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:49.843 [2024-07-12 
11:14:15.871927] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:49.843 [2024-07-12 11:14:15.871933] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:49.843 [2024-07-12 11:14:15.871942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:49.843 [2024-07-12 11:14:15.871955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:49.843 [2024-07-12 11:14:15.871963] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:49.843 [2024-07-12 11:14:15.871972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.871984] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:49.843 [2024-07-12 11:14:15.871991] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:49.843 [2024-07-12 11:14:15.872000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.872012] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:49.843 [2024-07-12 11:14:15.872020] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:49.843 [2024-07-12 11:14:15.872029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:49.843 [2024-07-12 11:14:15.879877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.879906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.879924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:49.843 [2024-07-12 11:14:15.879937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:49.843 ===================================================== 00:09:49.843 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:49.843 ===================================================== 00:09:49.843 Controller Capabilities/Features 00:09:49.843 ================================ 00:09:49.843 Vendor ID: 4e58 00:09:49.843 Subsystem Vendor ID: 4e58 00:09:49.843 Serial Number: SPDK2 00:09:49.843 Model Number: SPDK bdev Controller 00:09:49.843 Firmware Version: 24.09 00:09:49.843 Recommended Arb Burst: 6 00:09:49.843 IEEE OUI Identifier: 8d 6b 50 00:09:49.843 Multi-path I/O 00:09:49.843 May have multiple subsystem ports: Yes 00:09:49.843 May have multiple controllers: Yes 00:09:49.843 Associated with SR-IOV VF: No 00:09:49.843 Max Data Transfer Size: 131072 00:09:49.843 Max Number of Namespaces: 32 00:09:49.843 Max Number of I/O Queues: 127 00:09:49.843 NVMe Specification Version (VS): 1.3 00:09:49.843 NVMe Specification Version (Identify): 1.3 00:09:49.843 Maximum Queue Entries: 256 00:09:49.843 Contiguous Queues Required: Yes 00:09:49.843 Arbitration Mechanisms Supported 00:09:49.843 Weighted Round Robin: Not Supported 00:09:49.843 Vendor Specific: Not Supported 00:09:49.843 Reset Timeout: 15000 ms 00:09:49.843 Doorbell Stride: 4 bytes 00:09:49.843 NVM Subsystem Reset: Not Supported 00:09:49.843 Command Sets Supported 00:09:49.843 NVM Command Set: Supported 00:09:49.843 Boot Partition: Not Supported 
00:09:49.843 Memory Page Size Minimum: 4096 bytes 00:09:49.843 Memory Page Size Maximum: 4096 bytes 00:09:49.843 Persistent Memory Region: Not Supported 00:09:49.843 Optional Asynchronous Events Supported 00:09:49.843 Namespace Attribute Notices: Supported 00:09:49.843 Firmware Activation Notices: Not Supported 00:09:49.843 ANA Change Notices: Not Supported 00:09:49.843 PLE Aggregate Log Change Notices: Not Supported 00:09:49.843 LBA Status Info Alert Notices: Not Supported 00:09:49.843 EGE Aggregate Log Change Notices: Not Supported 00:09:49.843 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.843 Zone Descriptor Change Notices: Not Supported 00:09:49.843 Discovery Log Change Notices: Not Supported 00:09:49.843 Controller Attributes 00:09:49.843 128-bit Host Identifier: Supported 00:09:49.843 Non-Operational Permissive Mode: Not Supported 00:09:49.843 NVM Sets: Not Supported 00:09:49.843 Read Recovery Levels: Not Supported 00:09:49.843 Endurance Groups: Not Supported 00:09:49.843 Predictable Latency Mode: Not Supported 00:09:49.843 Traffic Based Keep ALive: Not Supported 00:09:49.843 Namespace Granularity: Not Supported 00:09:49.843 SQ Associations: Not Supported 00:09:49.843 UUID List: Not Supported 00:09:49.843 Multi-Domain Subsystem: Not Supported 00:09:49.843 Fixed Capacity Management: Not Supported 00:09:49.843 Variable Capacity Management: Not Supported 00:09:49.844 Delete Endurance Group: Not Supported 00:09:49.844 Delete NVM Set: Not Supported 00:09:49.844 Extended LBA Formats Supported: Not Supported 00:09:49.844 Flexible Data Placement Supported: Not Supported 00:09:49.844 00:09:49.844 Controller Memory Buffer Support 00:09:49.844 ================================ 00:09:49.844 Supported: No 00:09:49.844 00:09:49.844 Persistent Memory Region Support 00:09:49.844 ================================ 00:09:49.844 Supported: No 00:09:49.844 00:09:49.844 Admin Command Set Attributes 00:09:49.844 ============================ 00:09:49.844 Security 
Send/Receive: Not Supported 00:09:49.844 Format NVM: Not Supported 00:09:49.844 Firmware Activate/Download: Not Supported 00:09:49.844 Namespace Management: Not Supported 00:09:49.844 Device Self-Test: Not Supported 00:09:49.844 Directives: Not Supported 00:09:49.844 NVMe-MI: Not Supported 00:09:49.844 Virtualization Management: Not Supported 00:09:49.844 Doorbell Buffer Config: Not Supported 00:09:49.844 Get LBA Status Capability: Not Supported 00:09:49.844 Command & Feature Lockdown Capability: Not Supported 00:09:49.844 Abort Command Limit: 4 00:09:49.844 Async Event Request Limit: 4 00:09:49.844 Number of Firmware Slots: N/A 00:09:49.844 Firmware Slot 1 Read-Only: N/A 00:09:49.844 Firmware Activation Without Reset: N/A 00:09:49.844 Multiple Update Detection Support: N/A 00:09:49.844 Firmware Update Granularity: No Information Provided 00:09:49.844 Per-Namespace SMART Log: No 00:09:49.844 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.844 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:49.844 Command Effects Log Page: Supported 00:09:49.844 Get Log Page Extended Data: Supported 00:09:49.844 Telemetry Log Pages: Not Supported 00:09:49.844 Persistent Event Log Pages: Not Supported 00:09:49.844 Supported Log Pages Log Page: May Support 00:09:49.844 Commands Supported & Effects Log Page: Not Supported 00:09:49.844 Feature Identifiers & Effects Log Page:May Support 00:09:49.844 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.844 Data Area 4 for Telemetry Log: Not Supported 00:09:49.844 Error Log Page Entries Supported: 128 00:09:49.844 Keep Alive: Supported 00:09:49.844 Keep Alive Granularity: 10000 ms 00:09:49.844 00:09:49.844 NVM Command Set Attributes 00:09:49.844 ========================== 00:09:49.844 Submission Queue Entry Size 00:09:49.844 Max: 64 00:09:49.844 Min: 64 00:09:49.844 Completion Queue Entry Size 00:09:49.844 Max: 16 00:09:49.844 Min: 16 00:09:49.844 Number of Namespaces: 32 00:09:49.844 Compare Command: Supported 
00:09:49.844 Write Uncorrectable Command: Not Supported 00:09:49.844 Dataset Management Command: Supported 00:09:49.844 Write Zeroes Command: Supported 00:09:49.844 Set Features Save Field: Not Supported 00:09:49.844 Reservations: Not Supported 00:09:49.844 Timestamp: Not Supported 00:09:49.844 Copy: Supported 00:09:49.844 Volatile Write Cache: Present 00:09:49.844 Atomic Write Unit (Normal): 1 00:09:49.844 Atomic Write Unit (PFail): 1 00:09:49.844 Atomic Compare & Write Unit: 1 00:09:49.844 Fused Compare & Write: Supported 00:09:49.844 Scatter-Gather List 00:09:49.844 SGL Command Set: Supported (Dword aligned) 00:09:49.844 SGL Keyed: Not Supported 00:09:49.844 SGL Bit Bucket Descriptor: Not Supported 00:09:49.844 SGL Metadata Pointer: Not Supported 00:09:49.844 Oversized SGL: Not Supported 00:09:49.844 SGL Metadata Address: Not Supported 00:09:49.844 SGL Offset: Not Supported 00:09:49.844 Transport SGL Data Block: Not Supported 00:09:49.844 Replay Protected Memory Block: Not Supported 00:09:49.844 00:09:49.844 Firmware Slot Information 00:09:49.844 ========================= 00:09:49.844 Active slot: 1 00:09:49.844 Slot 1 Firmware Revision: 24.09 00:09:49.844 00:09:49.844 00:09:49.844 Commands Supported and Effects 00:09:49.844 ============================== 00:09:49.844 Admin Commands 00:09:49.844 -------------- 00:09:49.844 Get Log Page (02h): Supported 00:09:49.844 Identify (06h): Supported 00:09:49.844 Abort (08h): Supported 00:09:49.844 Set Features (09h): Supported 00:09:49.844 Get Features (0Ah): Supported 00:09:49.844 Asynchronous Event Request (0Ch): Supported 00:09:49.844 Keep Alive (18h): Supported 00:09:49.844 I/O Commands 00:09:49.844 ------------ 00:09:49.844 Flush (00h): Supported LBA-Change 00:09:49.844 Write (01h): Supported LBA-Change 00:09:49.844 Read (02h): Supported 00:09:49.844 Compare (05h): Supported 00:09:49.844 Write Zeroes (08h): Supported LBA-Change 00:09:49.844 Dataset Management (09h): Supported LBA-Change 00:09:49.844 Copy (19h): 
Supported LBA-Change 00:09:49.844 00:09:49.844 Error Log 00:09:49.844 ========= 00:09:49.844 00:09:49.844 Arbitration 00:09:49.844 =========== 00:09:49.844 Arbitration Burst: 1 00:09:49.844 00:09:49.844 Power Management 00:09:49.844 ================ 00:09:49.844 Number of Power States: 1 00:09:49.844 Current Power State: Power State #0 00:09:49.844 Power State #0: 00:09:49.844 Max Power: 0.00 W 00:09:49.844 Non-Operational State: Operational 00:09:49.844 Entry Latency: Not Reported 00:09:49.844 Exit Latency: Not Reported 00:09:49.844 Relative Read Throughput: 0 00:09:49.844 Relative Read Latency: 0 00:09:49.844 Relative Write Throughput: 0 00:09:49.844 Relative Write Latency: 0 00:09:49.844 Idle Power: Not Reported 00:09:49.844 Active Power: Not Reported 00:09:49.844 Non-Operational Permissive Mode: Not Supported 00:09:49.844 00:09:49.845 Health Information 00:09:49.845 ================== 00:09:49.845 Critical Warnings: 00:09:49.845 Available Spare Space: OK 00:09:49.845 Temperature: OK 00:09:49.845 Device Reliability: OK 00:09:49.845 Read Only: No 00:09:49.845 Volatile Memory Backup: OK 00:09:49.845 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:49.845 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:49.845 Available Spare: 0% 00:09:49.845 Available Sp[2024-07-12 11:14:15.880051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:49.845 [2024-07-12 11:14:15.887875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:49.845 [2024-07-12 11:14:15.887926] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:49.845 [2024-07-12 11:14:15.887945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.845 [2024-07-12 11:14:15.887956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.845 [2024-07-12 11:14:15.887967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.845 [2024-07-12 11:14:15.887977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.845 [2024-07-12 11:14:15.888071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:49.845 [2024-07-12 11:14:15.888093] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:49.845 [2024-07-12 11:14:15.889073] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:49.845 [2024-07-12 11:14:15.889146] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:49.845 [2024-07-12 11:14:15.889162] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:49.845 [2024-07-12 11:14:15.890076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:49.845 [2024-07-12 11:14:15.890100] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:49.845 [2024-07-12 11:14:15.890167] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:49.845 [2024-07-12 11:14:15.892878] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:49.845 are Threshold: 0% 00:09:49.845 
Life Percentage Used: 0% 00:09:49.845 Data Units Read: 0 00:09:49.845 Data Units Written: 0 00:09:49.845 Host Read Commands: 0 00:09:49.845 Host Write Commands: 0 00:09:49.845 Controller Busy Time: 0 minutes 00:09:49.845 Power Cycles: 0 00:09:49.845 Power On Hours: 0 hours 00:09:49.845 Unsafe Shutdowns: 0 00:09:49.845 Unrecoverable Media Errors: 0 00:09:49.845 Lifetime Error Log Entries: 0 00:09:49.845 Warning Temperature Time: 0 minutes 00:09:49.845 Critical Temperature Time: 0 minutes 00:09:49.845 00:09:49.845 Number of Queues 00:09:49.845 ================ 00:09:49.845 Number of I/O Submission Queues: 127 00:09:49.845 Number of I/O Completion Queues: 127 00:09:49.845 00:09:49.845 Active Namespaces 00:09:49.845 ================= 00:09:49.845 Namespace ID:1 00:09:49.845 Error Recovery Timeout: Unlimited 00:09:49.845 Command Set Identifier: NVM (00h) 00:09:49.845 Deallocate: Supported 00:09:49.845 Deallocated/Unwritten Error: Not Supported 00:09:49.845 Deallocated Read Value: Unknown 00:09:49.845 Deallocate in Write Zeroes: Not Supported 00:09:49.845 Deallocated Guard Field: 0xFFFF 00:09:49.845 Flush: Supported 00:09:49.845 Reservation: Supported 00:09:49.845 Namespace Sharing Capabilities: Multiple Controllers 00:09:49.845 Size (in LBAs): 131072 (0GiB) 00:09:49.845 Capacity (in LBAs): 131072 (0GiB) 00:09:49.845 Utilization (in LBAs): 131072 (0GiB) 00:09:49.845 NGUID: 2149FD1DCD2643298ED6E73DFF20A51F 00:09:49.845 UUID: 2149fd1d-cd26-4329-8ed6-e73dff20a51f 00:09:49.845 Thin Provisioning: Not Supported 00:09:49.845 Per-NS Atomic Units: Yes 00:09:49.845 Atomic Boundary Size (Normal): 0 00:09:49.845 Atomic Boundary Size (PFail): 0 00:09:49.845 Atomic Boundary Offset: 0 00:09:49.845 Maximum Single Source Range Length: 65535 00:09:49.845 Maximum Copy Length: 65535 00:09:49.845 Maximum Source Range Count: 1 00:09:49.845 NGUID/EUI64 Never Reused: No 00:09:49.845 Namespace Write Protected: No 00:09:49.845 Number of LBA Formats: 1 00:09:49.845 Current LBA Format: LBA Format 
#00 00:09:49.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.845 00:09:49.845 11:14:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:49.845 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.103 [2024-07-12 11:14:16.123703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:55.364 Initializing NVMe Controllers 00:09:55.364 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:55.364 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:09:55.364 Initialization complete. Launching workers. 00:09:55.364 ======================================================== 00:09:55.364 Latency(us) 00:09:55.364 Device Information : IOPS MiB/s Average min max 00:09:55.364 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34385.33 134.32 3721.92 1150.48 8210.07 00:09:55.364 ======================================================== 00:09:55.364 Total : 34385.33 134.32 3721.92 1150.48 8210.07 00:09:55.364 00:09:55.364 [2024-07-12 11:14:21.231288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:55.364 11:14:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:55.364 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.364 [2024-07-12 11:14:21.474988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:00.624 
Initializing NVMe Controllers 00:10:00.625 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:00.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:00.625 Initialization complete. Launching workers. 00:10:00.625 ======================================================== 00:10:00.625 Latency(us) 00:10:00.625 Device Information : IOPS MiB/s Average min max 00:10:00.625 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31828.99 124.33 4023.43 1184.33 8247.54 00:10:00.625 ======================================================== 00:10:00.625 Total : 31828.99 124.33 4023.43 1184.33 8247.54 00:10:00.625 00:10:00.625 [2024-07-12 11:14:26.497304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:00.625 11:14:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:00.625 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.625 [2024-07-12 11:14:26.719532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:05.882 [2024-07-12 11:14:31.860015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:05.882 Initializing NVMe Controllers 00:10:05.882 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:05.882 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:05.882 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:05.882 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:10:05.882 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:05.882 Initialization complete. Launching workers. 00:10:05.882 Starting thread on core 2 00:10:05.882 Starting thread on core 3 00:10:05.882 Starting thread on core 1 00:10:05.882 11:14:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:05.882 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.140 [2024-07-12 11:14:32.168357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:09.422 [2024-07-12 11:14:35.230745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:09.422 Initializing NVMe Controllers 00:10:09.422 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.422 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.422 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:09.422 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:09.422 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:09.422 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:09.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:09.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:09.422 Initialization complete. Launching workers. 
00:10:09.422 Starting thread on core 1 with urgent priority queue 00:10:09.422 Starting thread on core 2 with urgent priority queue 00:10:09.422 Starting thread on core 3 with urgent priority queue 00:10:09.422 Starting thread on core 0 with urgent priority queue 00:10:09.422 SPDK bdev Controller (SPDK2 ) core 0: 1047.67 IO/s 95.45 secs/100000 ios 00:10:09.422 SPDK bdev Controller (SPDK2 ) core 1: 1264.33 IO/s 79.09 secs/100000 ios 00:10:09.422 SPDK bdev Controller (SPDK2 ) core 2: 1237.33 IO/s 80.82 secs/100000 ios 00:10:09.422 SPDK bdev Controller (SPDK2 ) core 3: 1247.67 IO/s 80.15 secs/100000 ios 00:10:09.422 ======================================================== 00:10:09.422 00:10:09.422 11:14:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:09.422 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.422 [2024-07-12 11:14:35.530351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:09.422 Initializing NVMe Controllers 00:10:09.422 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.422 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:09.422 Namespace ID: 1 size: 0GB 00:10:09.422 Initialization complete. 00:10:09.422 INFO: using host memory buffer for IO 00:10:09.422 Hello world! 
00:10:09.422 [2024-07-12 11:14:35.539397] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:09.679 11:14:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:09.679 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.936 [2024-07-12 11:14:35.826460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:10.868 Initializing NVMe Controllers 00:10:10.868 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:10.868 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:10.868 Initialization complete. Launching workers. 00:10:10.868 submit (in ns) avg, min, max = 7917.3, 3504.4, 4016572.2 00:10:10.868 complete (in ns) avg, min, max = 25347.2, 2060.0, 4023213.3 00:10:10.868 00:10:10.868 Submit histogram 00:10:10.868 ================ 00:10:10.868 Range in us Cumulative Count 00:10:10.868 3.484 - 3.508: 0.0076% ( 1) 00:10:10.868 3.508 - 3.532: 0.3048% ( 39) 00:10:10.868 3.532 - 3.556: 0.9830% ( 89) 00:10:10.868 3.556 - 3.579: 3.1014% ( 278) 00:10:10.869 3.579 - 3.603: 7.0487% ( 518) 00:10:10.869 3.603 - 3.627: 14.2727% ( 948) 00:10:10.869 3.627 - 3.650: 22.0453% ( 1020) 00:10:10.869 3.650 - 3.674: 30.9533% ( 1169) 00:10:10.869 3.674 - 3.698: 37.4000% ( 846) 00:10:10.869 3.698 - 3.721: 45.2717% ( 1033) 00:10:10.869 3.721 - 3.745: 51.2078% ( 779) 00:10:10.869 3.745 - 3.769: 56.1838% ( 653) 00:10:10.869 3.769 - 3.793: 60.1539% ( 521) 00:10:10.869 3.793 - 3.816: 63.9869% ( 503) 00:10:10.869 3.816 - 3.840: 67.4998% ( 461) 00:10:10.869 3.840 - 3.864: 71.6452% ( 544) 00:10:10.869 3.864 - 3.887: 75.4096% ( 494) 00:10:10.869 3.887 - 3.911: 79.1511% ( 491) 00:10:10.869 3.911 - 3.935: 82.5116% ( 441) 00:10:10.869 3.935 - 3.959: 85.3387% ( 371) 
00:10:10.869 3.959 - 3.982: 87.3047% ( 258) 00:10:10.869 3.982 - 4.006: 88.9507% ( 216) 00:10:10.869 4.006 - 4.030: 90.4519% ( 197) 00:10:10.869 4.030 - 4.053: 91.5568% ( 145) 00:10:10.869 4.053 - 4.077: 92.6541% ( 144) 00:10:10.869 4.077 - 4.101: 93.3247% ( 88) 00:10:10.869 4.101 - 4.124: 94.1172% ( 104) 00:10:10.869 4.124 - 4.148: 94.7649% ( 85) 00:10:10.869 4.148 - 4.172: 95.1535% ( 51) 00:10:10.869 4.172 - 4.196: 95.5574% ( 53) 00:10:10.869 4.196 - 4.219: 95.8927% ( 44) 00:10:10.869 4.219 - 4.243: 96.2128% ( 42) 00:10:10.869 4.243 - 4.267: 96.3804% ( 22) 00:10:10.869 4.267 - 4.290: 96.5176% ( 18) 00:10:10.869 4.290 - 4.314: 96.6471% ( 17) 00:10:10.869 4.314 - 4.338: 96.7614% ( 15) 00:10:10.869 4.338 - 4.361: 96.8452% ( 11) 00:10:10.869 4.361 - 4.385: 96.9291% ( 11) 00:10:10.869 4.385 - 4.409: 97.0053% ( 10) 00:10:10.869 4.409 - 4.433: 97.0586% ( 7) 00:10:10.869 4.433 - 4.456: 97.0815% ( 3) 00:10:10.869 4.456 - 4.480: 97.0967% ( 2) 00:10:10.869 4.480 - 4.504: 97.1119% ( 2) 00:10:10.869 4.504 - 4.527: 97.1424% ( 4) 00:10:10.869 4.527 - 4.551: 97.1958% ( 7) 00:10:10.869 4.575 - 4.599: 97.2034% ( 1) 00:10:10.869 4.599 - 4.622: 97.2186% ( 2) 00:10:10.869 4.622 - 4.646: 97.2262% ( 1) 00:10:10.869 4.646 - 4.670: 97.2339% ( 1) 00:10:10.869 4.670 - 4.693: 97.2415% ( 1) 00:10:10.869 4.693 - 4.717: 97.2491% ( 1) 00:10:10.869 4.717 - 4.741: 97.2796% ( 4) 00:10:10.869 4.741 - 4.764: 97.3101% ( 4) 00:10:10.869 4.764 - 4.788: 97.3253% ( 2) 00:10:10.869 4.788 - 4.812: 97.3710% ( 6) 00:10:10.869 4.812 - 4.836: 97.4015% ( 4) 00:10:10.869 4.836 - 4.859: 97.4396% ( 5) 00:10:10.869 4.859 - 4.883: 97.5006% ( 8) 00:10:10.869 4.883 - 4.907: 97.5234% ( 3) 00:10:10.869 4.907 - 4.930: 97.5692% ( 6) 00:10:10.869 4.930 - 4.954: 97.5996% ( 4) 00:10:10.869 4.954 - 4.978: 97.6606% ( 8) 00:10:10.869 4.978 - 5.001: 97.6835% ( 3) 00:10:10.869 5.001 - 5.025: 97.7139% ( 4) 00:10:10.869 5.025 - 5.049: 97.7444% ( 4) 00:10:10.869 5.049 - 5.073: 97.7749% ( 4) 00:10:10.869 5.073 - 5.096: 97.8130% ( 5) 
00:10:10.869 5.096 - 5.120: 97.8587% ( 6) 00:10:10.869 5.120 - 5.144: 97.9273% ( 9) 00:10:10.869 5.144 - 5.167: 97.9502% ( 3) 00:10:10.869 5.167 - 5.191: 98.0035% ( 7) 00:10:10.869 5.191 - 5.215: 98.0721% ( 9) 00:10:10.869 5.215 - 5.239: 98.0873% ( 2) 00:10:10.869 5.239 - 5.262: 98.1102% ( 3) 00:10:10.869 5.262 - 5.286: 98.1407% ( 4) 00:10:10.869 5.286 - 5.310: 98.1635% ( 3) 00:10:10.869 5.310 - 5.333: 98.1864% ( 3) 00:10:10.869 5.333 - 5.357: 98.2093% ( 3) 00:10:10.869 5.357 - 5.381: 98.2321% ( 3) 00:10:10.869 5.381 - 5.404: 98.2397% ( 1) 00:10:10.869 5.404 - 5.428: 98.2474% ( 1) 00:10:10.869 5.428 - 5.452: 98.2550% ( 1) 00:10:10.869 5.452 - 5.476: 98.2702% ( 2) 00:10:10.869 5.476 - 5.499: 98.2778% ( 1) 00:10:10.869 5.523 - 5.547: 98.2855% ( 1) 00:10:10.869 5.547 - 5.570: 98.2931% ( 1) 00:10:10.869 5.570 - 5.594: 98.3007% ( 1) 00:10:10.869 5.594 - 5.618: 98.3159% ( 2) 00:10:10.869 5.641 - 5.665: 98.3236% ( 1) 00:10:10.869 5.665 - 5.689: 98.3312% ( 1) 00:10:10.869 5.736 - 5.760: 98.3388% ( 1) 00:10:10.869 5.831 - 5.855: 98.3464% ( 1) 00:10:10.869 5.902 - 5.926: 98.3540% ( 1) 00:10:10.869 5.950 - 5.973: 98.3617% ( 1) 00:10:10.869 5.973 - 5.997: 98.3769% ( 2) 00:10:10.869 6.021 - 6.044: 98.3845% ( 1) 00:10:10.869 6.116 - 6.163: 98.4150% ( 4) 00:10:10.869 6.163 - 6.210: 98.4226% ( 1) 00:10:10.869 6.210 - 6.258: 98.4302% ( 1) 00:10:10.869 6.258 - 6.305: 98.4379% ( 1) 00:10:10.869 6.353 - 6.400: 98.4455% ( 1) 00:10:10.869 6.969 - 7.016: 98.4531% ( 1) 00:10:10.869 7.064 - 7.111: 98.4683% ( 2) 00:10:10.869 7.111 - 7.159: 98.4836% ( 2) 00:10:10.869 7.206 - 7.253: 98.4988% ( 2) 00:10:10.869 7.348 - 7.396: 98.5064% ( 1) 00:10:10.869 7.396 - 7.443: 98.5369% ( 4) 00:10:10.869 7.443 - 7.490: 98.5445% ( 1) 00:10:10.869 7.538 - 7.585: 98.5598% ( 2) 00:10:10.869 7.633 - 7.680: 98.5750% ( 2) 00:10:10.869 7.680 - 7.727: 98.5903% ( 2) 00:10:10.869 7.775 - 7.822: 98.5979% ( 1) 00:10:10.869 7.822 - 7.870: 98.6055% ( 1) 00:10:10.869 7.870 - 7.917: 98.6131% ( 1) 00:10:10.869 7.917 - 
7.964: 98.6284% ( 2) 00:10:10.869 7.964 - 8.012: 98.6360% ( 1) 00:10:10.869 8.012 - 8.059: 98.6588% ( 3) 00:10:10.869 8.059 - 8.107: 98.6665% ( 1) 00:10:10.869 8.107 - 8.154: 98.6741% ( 1) 00:10:10.869 8.154 - 8.201: 98.6817% ( 1) 00:10:10.869 8.249 - 8.296: 98.6893% ( 1) 00:10:10.869 8.391 - 8.439: 98.7122% ( 3) 00:10:10.869 8.581 - 8.628: 98.7198% ( 1) 00:10:10.869 8.628 - 8.676: 98.7274% ( 1) 00:10:10.869 8.723 - 8.770: 98.7350% ( 1) 00:10:10.869 8.770 - 8.818: 98.7427% ( 1) 00:10:10.869 8.960 - 9.007: 98.7579% ( 2) 00:10:10.869 9.292 - 9.339: 98.7655% ( 1) 00:10:10.869 9.529 - 9.576: 98.7808% ( 2) 00:10:10.869 9.624 - 9.671: 98.7884% ( 1) 00:10:10.869 9.861 - 9.908: 98.7960% ( 1) 00:10:10.869 9.956 - 10.003: 98.8036% ( 1) 00:10:10.869 10.003 - 10.050: 98.8112% ( 1) 00:10:10.869 10.145 - 10.193: 98.8189% ( 1) 00:10:10.869 10.287 - 10.335: 98.8265% ( 1) 00:10:10.869 10.572 - 10.619: 98.8417% ( 2) 00:10:10.869 10.667 - 10.714: 98.8493% ( 1) 00:10:10.869 10.809 - 10.856: 98.8646% ( 2) 00:10:10.869 10.856 - 10.904: 98.8722% ( 1) 00:10:10.869 11.046 - 11.093: 98.8798% ( 1) 00:10:10.869 11.093 - 11.141: 98.8874% ( 1) 00:10:10.869 11.236 - 11.283: 98.8951% ( 1) 00:10:10.869 11.804 - 11.852: 98.9179% ( 3) 00:10:10.869 11.994 - 12.041: 98.9256% ( 1) 00:10:10.869 12.089 - 12.136: 98.9408% ( 2) 00:10:10.869 12.326 - 12.421: 98.9484% ( 1) 00:10:10.869 12.516 - 12.610: 98.9560% ( 1) 00:10:10.869 12.705 - 12.800: 98.9637% ( 1) 00:10:10.869 12.800 - 12.895: 98.9713% ( 1) 00:10:10.869 12.895 - 12.990: 98.9941% ( 3) 00:10:10.869 13.559 - 13.653: 99.0094% ( 2) 00:10:10.869 13.653 - 13.748: 99.0170% ( 1) 00:10:10.869 13.938 - 14.033: 99.0322% ( 2) 00:10:10.869 14.127 - 14.222: 99.0399% ( 1) 00:10:10.869 14.317 - 14.412: 99.0475% ( 1) 00:10:10.869 14.696 - 14.791: 99.0551% ( 1) 00:10:10.869 14.886 - 14.981: 99.0703% ( 2) 00:10:10.869 17.256 - 17.351: 99.1008% ( 4) 00:10:10.869 17.351 - 17.446: 99.1161% ( 2) 00:10:10.869 17.446 - 17.541: 99.1542% ( 5) 00:10:10.869 17.541 - 17.636: 
99.1694% ( 2) 00:10:10.869 17.636 - 17.730: 99.2227% ( 7) 00:10:10.869 17.730 - 17.825: 99.2608% ( 5) 00:10:10.869 17.825 - 17.920: 99.2685% ( 1) 00:10:10.869 17.920 - 18.015: 99.3066% ( 5) 00:10:10.869 18.015 - 18.110: 99.3751% ( 9) 00:10:10.869 18.110 - 18.204: 99.4361% ( 8) 00:10:10.869 18.204 - 18.299: 99.5047% ( 9) 00:10:10.869 18.299 - 18.394: 99.5809% ( 10) 00:10:10.869 18.394 - 18.489: 99.6114% ( 4) 00:10:10.869 18.489 - 18.584: 99.6876% ( 10) 00:10:10.869 18.584 - 18.679: 99.7104% ( 3) 00:10:10.869 18.679 - 18.773: 99.7181% ( 1) 00:10:10.869 18.773 - 18.868: 99.7562% ( 5) 00:10:10.869 18.868 - 18.963: 99.7866% ( 4) 00:10:10.869 18.963 - 19.058: 99.7943% ( 1) 00:10:10.869 19.058 - 19.153: 99.8019% ( 1) 00:10:10.869 19.153 - 19.247: 99.8095% ( 1) 00:10:10.869 19.437 - 19.532: 99.8247% ( 2) 00:10:10.869 19.532 - 19.627: 99.8324% ( 1) 00:10:10.869 19.816 - 19.911: 99.8400% ( 1) 00:10:10.869 20.196 - 20.290: 99.8476% ( 1) 00:10:10.869 20.764 - 20.859: 99.8552% ( 1) 00:10:10.869 20.859 - 20.954: 99.8628% ( 1) 00:10:10.869 21.713 - 21.807: 99.8705% ( 1) 00:10:10.869 22.756 - 22.850: 99.8781% ( 1) 00:10:10.869 23.893 - 23.988: 99.8857% ( 1) 00:10:10.869 24.178 - 24.273: 99.8933% ( 1) 00:10:10.869 31.858 - 32.047: 99.9009% ( 1) 00:10:10.869 3980.705 - 4004.978: 99.9771% ( 10) 00:10:10.869 4004.978 - 4029.250: 100.0000% ( 3) 00:10:10.869 00:10:10.869 Complete histogram 00:10:10.869 ================== 00:10:10.869 Range in us Cumulative Count 00:10:10.869 2.050 - 2.062: 0.1143% ( 15) 00:10:10.869 2.062 - 2.074: 20.8337% ( 2719) 00:10:10.870 2.074 - 2.086: 30.5266% ( 1272) 00:10:10.870 2.086 - 2.098: 35.1368% ( 605) 00:10:10.870 2.098 - 2.110: 54.7131% ( 2569) 00:10:10.870 2.110 - 2.121: 58.5613% ( 505) 00:10:10.870 2.121 - 2.133: 61.2436% ( 352) 00:10:10.870 2.133 - 2.145: 70.1669% ( 1171) 00:10:10.870 2.145 - 2.157: 71.9881% ( 239) 00:10:10.870 2.157 - 2.169: 75.2877% ( 433) 00:10:10.870 2.169 - 2.181: 80.2713% ( 654) 00:10:10.870 2.181 - 2.193: 81.4296% ( 152) 
00:10:10.870 2.193 - 2.204: 82.5116% ( 142) 00:10:10.870 2.204 - 2.216: 85.6816% ( 416) 00:10:10.870 2.216 - 2.228: 87.8991% ( 291) 00:10:10.870 2.228 - 2.240: 89.9947% ( 275) 00:10:10.870 2.240 - 2.252: 92.5932% ( 341) 00:10:10.870 2.252 - 2.264: 93.2256% ( 83) 00:10:10.870 2.264 - 2.276: 93.4847% ( 34) 00:10:10.870 2.276 - 2.287: 93.7514% ( 35) 00:10:10.870 2.287 - 2.299: 94.2925% ( 71) 00:10:10.870 2.299 - 2.311: 94.8487% ( 73) 00:10:10.870 2.311 - 2.323: 95.0926% ( 32) 00:10:10.870 2.323 - 2.335: 95.1764% ( 11) 00:10:10.870 2.335 - 2.347: 95.3669% ( 25) 00:10:10.870 2.347 - 2.359: 95.5727% ( 27) 00:10:10.870 2.359 - 2.370: 95.9308% ( 47) 00:10:10.870 2.370 - 2.382: 96.3042% ( 49) 00:10:10.870 2.382 - 2.394: 96.6623% ( 47) 00:10:10.870 2.394 - 2.406: 96.8605% ( 26) 00:10:10.870 2.406 - 2.418: 97.0434% ( 24) 00:10:10.870 2.418 - 2.430: 97.1500% ( 14) 00:10:10.870 2.430 - 2.441: 97.3329% ( 24) 00:10:10.870 2.441 - 2.453: 97.4472% ( 15) 00:10:10.870 2.453 - 2.465: 97.5692% ( 16) 00:10:10.870 2.465 - 2.477: 97.6301% ( 8) 00:10:10.870 2.477 - 2.489: 97.7063% ( 10) 00:10:10.870 2.489 - 2.501: 97.7597% ( 7) 00:10:10.870 2.501 - 2.513: 97.8206% ( 8) 00:10:10.870 2.513 - 2.524: 97.8359% ( 2) 00:10:10.870 2.524 - 2.536: 97.8511% ( 2) 00:10:10.870 2.536 - 2.548: 97.8663% ( 2) 00:10:10.870 2.548 - 2.560: 97.8968% ( 4) 00:10:10.870 2.560 - 2.572: 97.9349% ( 5) 00:10:10.870 2.572 - 2.584: 97.9425% ( 1) 00:10:10.870 2.584 - 2.596: 97.9578% ( 2) 00:10:10.870 2.596 - 2.607: 97.9730% ( 2) 00:10:10.870 2.607 - 2.619: 97.9806% ( 1) 00:10:10.870 2.619 - 2.631: 97.9883% ( 1) 00:10:10.870 2.631 - 2.643: 98.0340% ( 6) 00:10:10.870 2.643 - 2.655: 98.0568% ( 3) 00:10:10.870 2.655 - 2.667: 98.0721% ( 2) 00:10:10.870 2.702 - 2.714: 98.0873% ( 2) 00:10:10.870 2.714 - 2.726: 98.0949% ( 1) 00:10:10.870 2.738 - 2.750: 98.1026% ( 1) 00:10:10.870 2.750 - 2.761: 98.1330% ( 4) 00:10:10.870 2.761 - 2.773: 98.1559% ( 3) 00:10:10.870 2.785 - 2.797: 98.1635% ( 1) 00:10:10.870 2.797 - 2.809: 98.1711% ( 
1) 00:10:10.870 2.809 - 2.821: 98.1788% ( 1) 00:10:10.870 2.821 - 2.833: 98.1864% ( 1) 00:10:10.870 2.833 - 2.844: 98.2093% ( 3) 00:10:10.870 2.844 - 2.856: 98.2169% ( 1) 00:10:10.870 2.856 - 2.868: 98.2245% ( 1) 00:10:10.870 2.892 - 2.904: 98.2397% ( 2) 00:10:10.870 2.904 - 2.916: 98.2626% ( 3) 00:10:10.870 2.916 - 2.927: 98.2702% ( 1) 00:10:10.870 2.939 - 2.951: 98.2778% ( 1) 00:10:10.870 2.951 - 2.963: 98.2855% ( 1) 00:10:10.870 2.975 - 2.987: 98.2931% ( 1) 00:10:10.870 3.034 - 3.058: 98.3007% ( 1) 00:10:10.870 3.058 - 3.081: 98.3083% ( 1) 00:10:10.870 3.105 - 3.129: 98.3312% ( 3) 00:10:10.870 3.129 - 3.153: 98.3388% ( 1) 00:10:10.870 3.176 - 3.200: 98.3464% ( 1) 00:10:10.870 3.200 - 3.224: 98.3617% ( 2) 00:10:10.870 3.224 - 3.247: 98.3693% ( 1) 00:10:10.870 3.271 - 3.295: 98.3769% ( 1) 00:10:10.870 3.295 - 3.319: 98.3845% ( 1) 00:10:10.870 3.342 - 3.366: 98.3921% ( 1) 00:10:10.870 3.437 - 3.461: 98.4074% ( 2) 00:10:10.870 3.461 - 3.484: 98.4150% ( 1) 00:10:10.870 3.484 - 3.508: 98.4226% ( 1) 00:10:10.870 3.579 - 3.603: 98.4302% ( 1) 00:10:10.870 3.603 - 3.627: 98.4379% ( 1) 00:10:10.870 3.627 - 3.650: 98.4683% ( 4) 00:10:10.870 3.650 - 3.674: 98.4836% ( 2) 00:10:10.870 3.674 - 3.698: 98.4912% ( 1) 00:10:10.870 3.698 - 3.721: 98.4988% ( 1) 00:10:10.870 3.745 - 3.769: 98.5141% ( 2) 00:10:10.870 3.769 - 3.793: 98.5293% ( 2) 00:10:10.870 3.816 - 3.840: 98.5445% ( 2) 00:10:10.870 3.840 - 3.864: 98.5522% ( 1) 00:10:10.870 3.864 - 3.887: 98.5674% ( 2) 00:10:10.870 3.887 - 3.911: 98.5750% ( 1) 00:10:10.870 3.911 - 3.935: 98.5903% ( 2) 00:10:10.870 3.935 - 3.959: 98.5979% ( 1) 00:10:10.870 3.982 - 4.006: 98.6055% ( 1) 00:10:10.870 4.030 - 4.053: 98.6131% ( 1) 00:10:10.870 4.053 - 4.077: 98.6207% ( 1) 00:10:10.870 4.077 - 4.101: 98.6360% ( 2) 00:10:10.870 4.267 - 4.290: 98.6436% ( 1) 00:10:10.870 4.551 - 4.575: 98.6588% ( 2) 00:10:10.870 4.764 - 4.788: 98.6665% ( 1) 00:10:10.870 5.073 - 5.096: 98.6741% ( 1) 00:10:10.870 5.167 - 5.191: 98.6817% ( 1) 00:10:10.870 5.333 - 
5.357: 98.6893% ( 1) 00:10:10.870 5.452 - 5.476: 98.6969% ( 1) 00:10:10.870 5.547 - 5.570: 98.7046% ( 1) 00:10:10.870 5.570 - 5.594: 98.7122% ( 1) 00:10:10.870 5.641 - 5.665: 98.7198% ( 1) 00:10:10.870 5.665 - 5.689: 98.7274% ( 1) 00:10:10.870 5.713 - 5.736: 98.7350% ( 1) 00:10:10.870 5.902 - 5.926: 98.7427% ( 1) 00:10:10.870 6.116 - 6.163: 98.7503% ( 1) 00:10:10.870 6.258 - 6.305: 98.7579% ( 1) 00:10:10.870 6.305 - 6.353: 98.7655% ( 1) 00:10:10.870 6.400 - 6.447: 98.7808% ( 2) 00:10:10.870 6.447 - 6.495: 98.7960% ( 2) 00:10:10.870 6.590 - 6.637: 98.8036% ( 1) 00:10:10.870 6.684 - 6.732: 98.8112% ( 1) 00:10:10.870 7.111 - 7.159: 98.8189% ( 1) 00:10:10.870 7.775 - 7.822: 98.8265% ( 1) 00:10:10.870 8.012 - 8.059: 98.8341% ( 1) 00:10:10.870 9.481 - 9.529: 98.8417% ( 1) 00:10:10.870 10.382 - 10.430: 98.8493% ( 1) 00:10:10.870 11.093 - 11.141: 98.8570% ( 1) 00:10:10.870 13.084 - 13.179: 98.8646% ( 1) 00:10:10.870 15.644 - 15.739: 98.8722% ( 1) 00:10:10.870 15.739 - 15.834: 98.9027% ( 4) 00:10:10.870 15.929 - 16.024: 98.9256% ( 3) 00:10:10.870 16.024 - 16.119: 98.9484% ( 3) 00:10:10.870 16.119 - 16.213: 98.9637% ( 2) 00:10:10.870 16.213 - 16.308: 98.9789% ( 2) 00:10:10.870 16.308 - 16.403: 98.9865% ( 1) 00:10:10.870 16.403 - 16.498: 99.0475% ( 8) 00:10:10.870 16.498 - 16.593: 99.0932% ( 6) 00:10:10.870 16.593 - 16.687: 99.1923% ( 13) 00:10:10.870 16.687 - 16.782: 99.2380% ( 6) 00:10:10.870 16.782 - 16.877: 99.2608% ( 3) 00:10:10.870 16.877 - 16.972: 99.2837% ( 3) 00:10:10.870 16.972 - 17.067: 99.3218% ( 5) 00:10:10.870 17.067 - 17.161: 99.3370% ( 2) 00:10:10.870 17.161 - 17.256: 99.3447% ( 1) 00:10:10.870 17.256 - 17.351: 99.3523% ( 1) 00:10:10.870 17.636 - 17.730: 99.3599% ( 1) 00:10:10.870 17.920 - 18.015: 99.3675% ( 1) 00:10:10.870 18.299 - 18.394: 99.3751%[2024-07-12 11:14:36.927704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:10.870 ( 1) 00:10:10.870 18.679 - 18.773: 99.3828% ( 1) 00:10:10.870 18.773 - 
18.868: 99.3904% ( 1) 00:10:10.870 18.868 - 18.963: 99.4056% ( 2) 00:10:10.870 20.859 - 20.954: 99.4132% ( 1) 00:10:10.870 24.652 - 24.841: 99.4209% ( 1) 00:10:10.870 3422.436 - 3446.708: 99.4285% ( 1) 00:10:10.870 3835.070 - 3859.342: 99.4361% ( 1) 00:10:10.870 3980.705 - 4004.978: 99.8857% ( 59) 00:10:10.870 4004.978 - 4029.250: 100.0000% ( 15) 00:10:10.870 00:10:10.870 11:14:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:10.870 11:14:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:10.870 11:14:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:10.870 11:14:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:10.870 11:14:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:11.128 [ 00:10:11.128 { 00:10:11.128 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:11.128 "subtype": "Discovery", 00:10:11.128 "listen_addresses": [], 00:10:11.128 "allow_any_host": true, 00:10:11.128 "hosts": [] 00:10:11.128 }, 00:10:11.128 { 00:10:11.128 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:11.128 "subtype": "NVMe", 00:10:11.128 "listen_addresses": [ 00:10:11.128 { 00:10:11.128 "trtype": "VFIOUSER", 00:10:11.128 "adrfam": "IPv4", 00:10:11.128 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:11.128 "trsvcid": "0" 00:10:11.128 } 00:10:11.128 ], 00:10:11.128 "allow_any_host": true, 00:10:11.128 "hosts": [], 00:10:11.128 "serial_number": "SPDK1", 00:10:11.128 "model_number": "SPDK bdev Controller", 00:10:11.128 "max_namespaces": 32, 00:10:11.128 "min_cntlid": 1, 00:10:11.128 "max_cntlid": 65519, 00:10:11.128 "namespaces": [ 00:10:11.128 { 00:10:11.128 "nsid": 1, 00:10:11.128 "bdev_name": "Malloc1", 
00:10:11.128 "name": "Malloc1", 00:10:11.128 "nguid": "D038460546ED4E4392AD874A09155914", 00:10:11.128 "uuid": "d0384605-46ed-4e43-92ad-874a09155914" 00:10:11.128 }, 00:10:11.128 { 00:10:11.128 "nsid": 2, 00:10:11.128 "bdev_name": "Malloc3", 00:10:11.128 "name": "Malloc3", 00:10:11.128 "nguid": "D2F0230FAB8D4B77A5CC2DAC4E87DFE9", 00:10:11.128 "uuid": "d2f0230f-ab8d-4b77-a5cc-2dac4e87dfe9" 00:10:11.128 } 00:10:11.128 ] 00:10:11.128 }, 00:10:11.128 { 00:10:11.128 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:11.128 "subtype": "NVMe", 00:10:11.128 "listen_addresses": [ 00:10:11.128 { 00:10:11.128 "trtype": "VFIOUSER", 00:10:11.128 "adrfam": "IPv4", 00:10:11.128 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:11.128 "trsvcid": "0" 00:10:11.128 } 00:10:11.128 ], 00:10:11.128 "allow_any_host": true, 00:10:11.128 "hosts": [], 00:10:11.128 "serial_number": "SPDK2", 00:10:11.128 "model_number": "SPDK bdev Controller", 00:10:11.128 "max_namespaces": 32, 00:10:11.128 "min_cntlid": 1, 00:10:11.128 "max_cntlid": 65519, 00:10:11.128 "namespaces": [ 00:10:11.128 { 00:10:11.128 "nsid": 1, 00:10:11.128 "bdev_name": "Malloc2", 00:10:11.128 "name": "Malloc2", 00:10:11.128 "nguid": "2149FD1DCD2643298ED6E73DFF20A51F", 00:10:11.128 "uuid": "2149fd1d-cd26-4329-8ed6-e73dff20a51f" 00:10:11.128 } 00:10:11.128 ] 00:10:11.128 } 00:10:11.128 ] 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=522363 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user 
-- common/autotest_common.sh@1265 -- # local i=0 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:11.128 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:11.385 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.385 [2024-07-12 11:14:37.383382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:11.385 Malloc4 00:10:11.385 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:11.643 [2024-07-12 11:14:37.732909] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:11.643 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:11.901 Asynchronous Event Request test 00:10:11.901 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:11.901 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:11.901 Registering asynchronous event callbacks... 00:10:11.901 Starting namespace attribute notice tests for all controllers... 00:10:11.902 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:11.902 aer_cb - Changed Namespace 00:10:11.902 Cleaning up... 
00:10:11.902 [ 00:10:11.902 { 00:10:11.902 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:11.902 "subtype": "Discovery", 00:10:11.902 "listen_addresses": [], 00:10:11.902 "allow_any_host": true, 00:10:11.902 "hosts": [] 00:10:11.902 }, 00:10:11.902 { 00:10:11.902 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:11.902 "subtype": "NVMe", 00:10:11.902 "listen_addresses": [ 00:10:11.902 { 00:10:11.902 "trtype": "VFIOUSER", 00:10:11.902 "adrfam": "IPv4", 00:10:11.902 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:11.902 "trsvcid": "0" 00:10:11.902 } 00:10:11.902 ], 00:10:11.902 "allow_any_host": true, 00:10:11.902 "hosts": [], 00:10:11.902 "serial_number": "SPDK1", 00:10:11.902 "model_number": "SPDK bdev Controller", 00:10:11.902 "max_namespaces": 32, 00:10:11.902 "min_cntlid": 1, 00:10:11.902 "max_cntlid": 65519, 00:10:11.902 "namespaces": [ 00:10:11.902 { 00:10:11.902 "nsid": 1, 00:10:11.902 "bdev_name": "Malloc1", 00:10:11.902 "name": "Malloc1", 00:10:11.902 "nguid": "D038460546ED4E4392AD874A09155914", 00:10:11.902 "uuid": "d0384605-46ed-4e43-92ad-874a09155914" 00:10:11.902 }, 00:10:11.902 { 00:10:11.902 "nsid": 2, 00:10:11.902 "bdev_name": "Malloc3", 00:10:11.902 "name": "Malloc3", 00:10:11.902 "nguid": "D2F0230FAB8D4B77A5CC2DAC4E87DFE9", 00:10:11.902 "uuid": "d2f0230f-ab8d-4b77-a5cc-2dac4e87dfe9" 00:10:11.902 } 00:10:11.902 ] 00:10:11.902 }, 00:10:11.902 { 00:10:11.902 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:11.902 "subtype": "NVMe", 00:10:11.902 "listen_addresses": [ 00:10:11.902 { 00:10:11.902 "trtype": "VFIOUSER", 00:10:11.902 "adrfam": "IPv4", 00:10:11.902 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:11.902 "trsvcid": "0" 00:10:11.902 } 00:10:11.902 ], 00:10:11.902 "allow_any_host": true, 00:10:11.902 "hosts": [], 00:10:11.902 "serial_number": "SPDK2", 00:10:11.902 "model_number": "SPDK bdev Controller", 00:10:11.902 "max_namespaces": 32, 00:10:11.902 "min_cntlid": 1, 00:10:11.902 "max_cntlid": 65519, 00:10:11.902 "namespaces": [ 
00:10:11.902 { 00:10:11.902 "nsid": 1, 00:10:11.902 "bdev_name": "Malloc2", 00:10:11.902 "name": "Malloc2", 00:10:11.902 "nguid": "2149FD1DCD2643298ED6E73DFF20A51F", 00:10:11.902 "uuid": "2149fd1d-cd26-4329-8ed6-e73dff20a51f" 00:10:11.902 }, 00:10:11.902 { 00:10:11.902 "nsid": 2, 00:10:11.902 "bdev_name": "Malloc4", 00:10:11.902 "name": "Malloc4", 00:10:11.902 "nguid": "3E5060F489EA40AFA8CF282374BADD7C", 00:10:11.902 "uuid": "3e5060f4-89ea-40af-a8cf-282374badd7c" 00:10:11.902 } 00:10:11.902 ] 00:10:11.902 } 00:10:11.902 ] 00:10:11.902 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 522363 00:10:11.902 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:11.902 11:14:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 516755 00:10:11.902 11:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 516755 ']' 00:10:11.902 11:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 516755 00:10:11.902 11:14:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 516755 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 516755' 00:10:11.902 killing process with pid 516755 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 516755 00:10:11.902 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 516755 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm 
-rf /var/run/vfio-user 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=522505 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 522505' 00:10:12.467 Process pid: 522505 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 522505 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 522505 ']' 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.467 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:12.467 [2024-07-12 11:14:38.421656] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:12.467 [2024-07-12 11:14:38.422687] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:10:12.467 [2024-07-12 11:14:38.422757] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.467 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.467 [2024-07-12 11:14:38.480487] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.467 [2024-07-12 11:14:38.579513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.467 [2024-07-12 11:14:38.579566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.467 [2024-07-12 11:14:38.579594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.467 [2024-07-12 11:14:38.579604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.467 [2024-07-12 11:14:38.579614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.467 [2024-07-12 11:14:38.579693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.467 [2024-07-12 11:14:38.579755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.467 [2024-07-12 11:14:38.579841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.467 [2024-07-12 11:14:38.579844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.726 [2024-07-12 11:14:38.677829] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:12.726 [2024-07-12 11:14:38.678083] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:12.726 [2024-07-12 11:14:38.678352] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:12.726 [2024-07-12 11:14:38.679040] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:12.726 [2024-07-12 11:14:38.679289] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:10:12.726 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.726 11:14:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:12.726 11:14:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:13.659 11:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:13.943 11:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:13.943 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:13.943 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:13.943 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:13.943 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:14.241 Malloc1 00:10:14.241 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:14.499 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:14.756 11:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:15.014 11:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:15.014 11:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:10:15.014 11:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:15.271 Malloc2 00:10:15.529 11:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:15.786 11:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:16.043 11:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 522505 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 522505 ']' 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 522505 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 522505 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 522505' 00:10:16.301 killing 
process with pid 522505 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 522505 00:10:16.301 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 522505 00:10:16.558 11:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:16.559 00:10:16.559 real 0m52.791s 00:10:16.559 user 3m28.467s 00:10:16.559 sys 0m4.457s 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:16.559 ************************************ 00:10:16.559 END TEST nvmf_vfio_user 00:10:16.559 ************************************ 00:10:16.559 11:14:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:16.559 11:14:42 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:16.559 11:14:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:16.559 11:14:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.559 11:14:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:16.559 ************************************ 00:10:16.559 START TEST nvmf_vfio_user_nvme_compliance 00:10:16.559 ************************************ 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:16.559 * Looking for test storage... 
00:10:16.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.559 11:14:42 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:16.559 11:14:42 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=523102 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 523102' 00:10:16.559 Process pid: 523102 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 523102 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 523102 ']' 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.559 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:16.817 [2024-07-12 11:14:42.705720] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:10:16.817 [2024-07-12 11:14:42.705806] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.817 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.817 [2024-07-12 11:14:42.765436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.817 [2024-07-12 11:14:42.873668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.818 [2024-07-12 11:14:42.873746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:16.818 [2024-07-12 11:14:42.873774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.818 [2024-07-12 11:14:42.873786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.818 [2024-07-12 11:14:42.873796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.818 [2024-07-12 11:14:42.873873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.818 [2024-07-12 11:14:42.873904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.818 [2024-07-12 11:14:42.873907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.074 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.074 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:17.074 11:14:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:18.006 11:14:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:18.006 11:14:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:18.006 11:14:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:18.006 11:14:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.006 11:14:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 malloc0 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.006 11:14:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:18.006 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.263 00:10:18.263 00:10:18.263 CUnit - A unit testing framework for C - Version 2.1-3 00:10:18.263 http://cunit.sourceforge.net/ 00:10:18.263 00:10:18.263 00:10:18.263 Suite: nvme_compliance 00:10:18.263 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 11:14:44.229213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.264 [2024-07-12 11:14:44.230644] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:18.264 [2024-07-12 11:14:44.230669] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:18.264 [2024-07-12 11:14:44.230682] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:18.264 [2024-07-12 11:14:44.233239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.264 passed 00:10:18.264 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 11:14:44.320805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.264 [2024-07-12 11:14:44.323825] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.264 passed 00:10:18.521 Test: admin_identify_ns ...[2024-07-12 11:14:44.417409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.521 [2024-07-12 11:14:44.476899] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:18.521 [2024-07-12 11:14:44.484885] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:18.521 [2024-07-12 11:14:44.506008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:10:18.521 passed 00:10:18.521 Test: admin_get_features_mandatory_features ...[2024-07-12 11:14:44.589662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.521 [2024-07-12 11:14:44.592681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.521 passed 00:10:18.779 Test: admin_get_features_optional_features ...[2024-07-12 11:14:44.680274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.779 [2024-07-12 11:14:44.683297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.779 passed 00:10:18.779 Test: admin_set_features_number_of_queues ...[2024-07-12 11:14:44.771997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:18.779 [2024-07-12 11:14:44.876978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:18.779 passed 00:10:19.036 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 11:14:44.961174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.036 [2024-07-12 11:14:44.964204] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.036 passed 00:10:19.036 Test: admin_get_log_page_with_lpo ...[2024-07-12 11:14:45.047975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.036 [2024-07-12 11:14:45.115887] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:19.036 [2024-07-12 11:14:45.129946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.036 passed 00:10:19.293 Test: fabric_property_get ...[2024-07-12 11:14:45.213709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.293 [2024-07-12 11:14:45.215018] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:10:19.293 [2024-07-12 11:14:45.216730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.293 passed 00:10:19.293 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 11:14:45.302274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.293 [2024-07-12 11:14:45.303549] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:19.293 [2024-07-12 11:14:45.305297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.293 passed 00:10:19.293 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 11:14:45.388418] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.550 [2024-07-12 11:14:45.471879] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:19.550 [2024-07-12 11:14:45.487874] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:19.550 [2024-07-12 11:14:45.492981] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.550 passed 00:10:19.550 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 11:14:45.578610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.550 [2024-07-12 11:14:45.579951] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:19.550 [2024-07-12 11:14:45.581636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.550 passed 00:10:19.550 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 11:14:45.664898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.808 [2024-07-12 11:14:45.742880] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:19.808 [2024-07-12 11:14:45.766877] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:19.808 [2024-07-12 11:14:45.771985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.808 passed 00:10:19.808 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 11:14:45.857760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:19.808 [2024-07-12 11:14:45.859069] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:19.808 [2024-07-12 11:14:45.859110] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:19.808 [2024-07-12 11:14:45.860783] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:19.808 passed 00:10:20.065 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 11:14:45.944072] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:20.065 [2024-07-12 11:14:46.033891] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:20.065 [2024-07-12 11:14:46.041875] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:20.065 [2024-07-12 11:14:46.049894] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:20.065 [2024-07-12 11:14:46.057878] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:20.065 [2024-07-12 11:14:46.087003] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:20.065 passed 00:10:20.066 Test: admin_create_io_sq_verify_pc ...[2024-07-12 11:14:46.173170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:20.066 [2024-07-12 11:14:46.189892] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:20.323 [2024-07-12 11:14:46.207513] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:10:20.323 passed 00:10:20.323 Test: admin_create_io_qp_max_qps ...[2024-07-12 11:14:46.294121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.694 [2024-07-12 11:14:47.394906] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:21.694 [2024-07-12 11:14:47.759441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.694 passed 00:10:21.952 Test: admin_create_io_sq_shared_cq ...[2024-07-12 11:14:47.841296] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:21.952 [2024-07-12 11:14:47.976891] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:21.952 [2024-07-12 11:14:48.013961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:21.952 passed 00:10:21.952 00:10:21.952 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.952 suites 1 1 n/a 0 0 00:10:21.952 tests 18 18 18 0 0 00:10:21.952 asserts 360 360 360 0 n/a 00:10:21.952 00:10:21.952 Elapsed time = 1.570 seconds 00:10:21.952 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 523102 00:10:21.952 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 523102 ']' 00:10:21.952 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 523102 00:10:21.952 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:21.952 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.952 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 523102 00:10:22.210 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.210 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.210 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 523102' 00:10:22.210 killing process with pid 523102 00:10:22.210 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 523102 00:10:22.210 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 523102 00:10:22.468 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:22.468 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:22.468 00:10:22.468 real 0m5.805s 00:10:22.468 user 0m16.233s 00:10:22.468 sys 0m0.544s 00:10:22.468 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.468 11:14:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:22.468 ************************************ 00:10:22.468 END TEST nvmf_vfio_user_nvme_compliance 00:10:22.468 ************************************ 00:10:22.468 11:14:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.468 11:14:48 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:22.468 11:14:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.468 11:14:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.468 11:14:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.468 ************************************ 00:10:22.468 START TEST nvmf_vfio_user_fuzz 00:10:22.468 ************************************ 00:10:22.468 11:14:48 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:22.468 * Looking for test storage... 00:10:22.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.468 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.469 
11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=523827 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 523827' 00:10:22.469 Process pid: 523827 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 523827 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 523827 ']' 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.469 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:22.727 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.727 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:22.728 11:14:48 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:24.098 malloc0 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:24.098 11:14:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:10:56.168 Fuzzing completed. 
Shutting down the fuzz application 00:10:56.168 00:10:56.168 Dumping successful admin opcodes: 00:10:56.168 8, 9, 10, 24, 00:10:56.168 Dumping successful io opcodes: 00:10:56.168 0, 00:10:56.168 NS: 0x200003a1ef00 I/O qp, Total commands completed: 636989, total successful commands: 2472, random_seed: 3061571456 00:10:56.168 NS: 0x200003a1ef00 admin qp, Total commands completed: 81085, total successful commands: 645, random_seed: 4054855744 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 523827 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 523827 ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 523827 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 523827 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 523827' 00:10:56.168 killing process with pid 523827 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 523827 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 523827 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:56.168 00:10:56.168 real 0m32.286s 00:10:56.168 user 0m29.497s 00:10:56.168 sys 0m29.576s 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.168 11:15:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:56.168 ************************************ 00:10:56.168 END TEST nvmf_vfio_user_fuzz 00:10:56.168 ************************************ 00:10:56.168 11:15:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:56.168 11:15:20 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:56.168 11:15:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:56.168 11:15:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.168 11:15:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:56.168 ************************************ 00:10:56.168 START TEST nvmf_host_management 00:10:56.168 ************************************ 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:56.168 * Looking for test storage... 
00:10:56.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.168 
11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.168 11:15:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:56.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.736 
11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:56.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:56.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:56.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:56.736 11:15:22 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.736 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:56.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:56.994 00:10:56.994 --- 10.0.0.2 ping statistics --- 00:10:56.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.994 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:10:56.994 00:10:56.994 --- 10.0.0.1 ping statistics --- 00:10:56.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.994 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:56.994 11:15:22 
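The `nvmf_tcp_init` sequence traced at `@244`-`@268` moves the target interface into a private namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over a real TCP path, then verifies reachability with pings in both directions. A condensed sketch of that sequence follows; `run` echoes instead of executing so it can be inspected without root (address flushes and the ping checks are elided), and `setup_tcp_path` is a hypothetical name, not the helper's real one:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup in the trace. Replace the body of
# "run" with "$@" to actually apply the commands (requires root).
run() { echo "+ $*"; }

setup_tcp_path() {
    local target_if=$1 initiator_if=$2 ns=${3:-${1}_ns_spdk}
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # hide the target side
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # accept NVMe/TCP traffic (port 4420) arriving from the initiator side
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

With `setup_tcp_path cvl_0_0 cvl_0_1` this reproduces the command order shown in the trace, after which the log's `NVMF_TARGET_NS_CMD` prefix (`ip netns exec cvl_0_0_ns_spdk`) is prepended to the target app so it binds inside the namespace.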
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=529902 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 529902 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 529902 ']' 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.994 11:15:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:56.994 [2024-07-12 11:15:23.004003] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:10:56.994 [2024-07-12 11:15:23.004072] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.994 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.994 [2024-07-12 11:15:23.065090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.252 [2024-07-12 11:15:23.169816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.252 [2024-07-12 11:15:23.169901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.252 [2024-07-12 11:15:23.169916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.252 [2024-07-12 11:15:23.169934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.252 [2024-07-12 11:15:23.169943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:57.252 [2024-07-12 11:15:23.170092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.252 [2024-07-12 11:15:23.170157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.252 [2024-07-12 11:15:23.170225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.252 [2024-07-12 11:15:23.170227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.252 [2024-07-12 11:15:23.334820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.252 11:15:23 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.252 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.252 Malloc0 00:10:57.510 [2024-07-12 11:15:23.396279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=529950 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 529950 /var/tmp/bdevperf.sock 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 529950 ']' 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:57.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:57.510 { 00:10:57.510 "params": { 00:10:57.510 "name": "Nvme$subsystem", 00:10:57.510 "trtype": "$TEST_TRANSPORT", 00:10:57.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.510 "adrfam": "ipv4", 00:10:57.510 "trsvcid": "$NVMF_PORT", 00:10:57.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.510 "hdgst": ${hdgst:-false}, 00:10:57.510 "ddgst": ${ddgst:-false} 00:10:57.510 }, 00:10:57.510 "method": "bdev_nvme_attach_controller" 00:10:57.510 } 00:10:57.510 EOF 00:10:57.510 )") 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:57.510 11:15:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:57.510 "params": { 00:10:57.510 "name": "Nvme0", 00:10:57.510 "trtype": "tcp", 00:10:57.510 "traddr": "10.0.0.2", 00:10:57.510 "adrfam": "ipv4", 00:10:57.510 "trsvcid": "4420", 00:10:57.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:57.510 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:57.510 "hdgst": false, 00:10:57.510 "ddgst": false 00:10:57.510 }, 00:10:57.510 "method": "bdev_nvme_attach_controller" 00:10:57.510 }' 00:10:57.510 [2024-07-12 11:15:23.476303] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:10:57.510 [2024-07-12 11:15:23.476389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529950 ] 00:10:57.510 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.510 [2024-07-12 11:15:23.537665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.768 [2024-07-12 11:15:23.648135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.768 Running I/O for 10 seconds... 
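The `gen_nvmf_target_json` expansion traced at `@532`-`@558` builds one heredoc JSON fragment per subsystem id, then joins the fragments with `IFS=,` and prints the result for bdevperf to read over `--json /dev/fd/63`. The sketch below mirrors that fragment-and-join pattern with the addresses and NQNs printed in this log; the outer wrapper the real helper adds around the array is not visible in the trace and is left out, and the function names are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation seen in the trace.
frag_for() {
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json() {
    local s config=()
    for s in "${@:-0}"; do
        config+=("$(frag_for "$s")")   # one attach-controller entry per id
    done
    local IFS=,                        # join the fragments with commas
    printf '[%s]\n' "${config[*]}"
}
```

`gen_target_json 0` yields the single `Nvme0` entry that the log's final `printf '%s\n'` shows being handed to bdevperf.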
00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.332 
11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.591 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:58.591 [2024-07-12 11:15:24.473749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.591 [2024-07-12 11:15:24.473810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.473828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.591 [2024-07-12 11:15:24.473842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.473856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.591 [2024-07-12 11:15:24.473877] nvme_qpair.c: 
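The `waitforio` steps traced at `@52`-`@64` poll `bdev_get_iostat` over the bdevperf RPC socket, extract `num_read_ops` with `jq`, and succeed once the count clears the threshold (here 963 reads against the script's `-ge 100` check). A sketch of that polling shape is below; the stat source is a parameter so anything emitting `bdev_get_iostat`-style JSON can stand in for the real `rpc_cmd`, and `waitforio`'s retry count and sleep interval here are assumptions, not the script's exact values:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop from the trace: retry reading the
# bdev's num_read_ops until it reaches the threshold or retries run out.
waitforio() {
    local stat_cmd=$1 threshold=${2:-100} i count ret=1
    for ((i = 10; i > 0; i--)); do
        # jq pulls the first bdev's read-op counter from the iostat JSON
        count=$($stat_cmd | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge "$threshold" ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

Once the loop returns 0, the script proceeds to `nvmf_subsystem_remove_host`, which is what triggers the wall of `ABORTED - SQ DELETION` completions that follows in the log: the in-flight reads on the deleted submission queue are all failed back.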
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.473893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.591 [2024-07-12 11:15:24.473915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.473929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6a790 is same with the state(5) to be set 00:10:58.591 [2024-07-12 11:15:24.475230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 
11:15:24.475570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.591 [2024-07-12 11:15:24.475635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.591 [2024-07-12 11:15:24.475652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.475974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.475990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 
11:15:24.476139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 [2024-07-12 11:15:24.476291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:58.592 [2024-07-12 11:15:24.476307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.592 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]]
00:10:58.592 [2024-07-12 11:15:24.476324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:10:58.592 [2024-07-12 11:15:24.476501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:58.592 [2024-07-12 11:15:24.476629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:58.592 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.476983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.476998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:58.592 [2024-07-12 11:15:24.477368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:58.592 [2024-07-12 11:15:24.477384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137b900 is same with the state(5) to be set
00:10:58.592 [2024-07-12 11:15:24.477465] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137b900 was disconnected and freed. reset controller.
00:10:58.592 [2024-07-12 11:15:24.478662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:10:58.592 task offset: 8064 on job bdev=Nvme0n1 fails
00:10:58.592
00:10:58.592 Latency(us)
00:10:58.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.592 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:58.592 Job: Nvme0n1 ended in about 0.65 seconds with error
00:10:58.592 Verification LBA range: start 0x0 length 0x400
00:10:58.592 Nvme0n1 : 0.65 1585.41 99.09 99.09 0.00 37206.94 7039.05 34564.17
00:10:58.592 ===================================================================================================================
00:10:58.592 Total : 1585.41 99.09 99.09 0.00 37206.94 7039.05 34564.17
00:10:58.592 [2024-07-12 11:15:24.480664] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:58.592 [2024-07-12 11:15:24.480693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a790 (9): Bad file descriptor
00:10:58.593 11:15:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:58.593 11:15:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:10:58.593 [2024-07-12 11:15:24.492159] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 529950
00:10:59.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (529950) - No such process
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:59.523 {
00:10:59.523 "params": {
00:10:59.523 "name": "Nvme$subsystem",
00:10:59.523 "trtype": "$TEST_TRANSPORT",
00:10:59.523 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:59.523 "adrfam": "ipv4",
00:10:59.523 "trsvcid": "$NVMF_PORT",
00:10:59.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:59.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:59.523 "hdgst": ${hdgst:-false},
00:10:59.523 "ddgst": ${ddgst:-false}
00:10:59.523 },
00:10:59.523 "method": "bdev_nvme_attach_controller"
00:10:59.523 }
00:10:59.523 EOF
00:10:59.523 )")
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:10:59.523 11:15:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:59.523 "params": {
00:10:59.523 "name": "Nvme0",
00:10:59.523 "trtype": "tcp",
00:10:59.523 "traddr": "10.0.0.2",
00:10:59.523 "adrfam": "ipv4",
00:10:59.523 "trsvcid": "4420",
00:10:59.523 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:10:59.523 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:10:59.523 "hdgst": false,
00:10:59.523 "ddgst": false
00:10:59.523 },
00:10:59.523 "method": "bdev_nvme_attach_controller"
00:10:59.523 }'
00:10:59.523 [2024-07-12 11:15:25.529577] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:10:59.523 [2024-07-12 11:15:25.529665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530229 ]
00:10:59.523 EAL: No free 2048 kB hugepages reported on node 1
00:10:59.523 [2024-07-12 11:15:25.590234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:59.780 [2024-07-12 11:15:25.699578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:00.038 Running I/O for 1 seconds...
00:11:00.970
00:11:00.970 Latency(us)
00:11:00.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:00.970 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:00.970 Verification LBA range: start 0x0 length 0x400
00:11:00.970 Nvme0n1 : 1.01 1708.94 106.81 0.00 0.00 36827.01 4781.70 33010.73
00:11:00.970 ===================================================================================================================
00:11:00.970 Total : 1708.94 106.81 0.00 0.00 36827.01 4781.70 33010.73
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:01.229 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:01.230 rmmod nvme_tcp
00:11:01.230 rmmod nvme_fabrics
00:11:01.230 rmmod nvme_keyring
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 529902 ']'
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 529902
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 529902 ']'
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 529902
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:01.230 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 529902
00:11:01.539 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:11:01.539 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:11:01.539 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 529902'
00:11:01.539 killing process with pid 529902
00:11:01.539 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 529902
00:11:01.539 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 529902
00:11:01.539 [2024-07-12 11:15:27.626789] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:01.823 11:15:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:03.727 11:15:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:03.727 11:15:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:11:03.727
00:11:03.727 real 0m8.931s
00:11:03.727 user 0m21.158s
00:11:03.727 sys 0m2.629s
00:11:03.727 11:15:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:03.727 11:15:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:03.727 ************************************
00:11:03.727 END TEST nvmf_host_management
00:11:03.727 ************************************
00:11:03.727 11:15:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:03.727 11:15:29 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:03.727 11:15:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:03.727 11:15:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:03.727 11:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:03.727 ************************************
00:11:03.727 START TEST nvmf_lvol
00:11:03.727 ************************************
00:11:03.727 11:15:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:03.727 * Looking for test storage...
00:11:03.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.728 11:15:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:06.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.262 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:06.263 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:06.263 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:06.263 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:06.263 00:11:06.263 --- 10.0.0.2 ping statistics --- 00:11:06.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.263 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:11:06.263 00:11:06.263 --- 10.0.0.1 ping statistics --- 00:11:06.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.263 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.263 11:15:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=532422 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 532422 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 532422 ']' 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.263 [2024-07-12 11:15:32.070292] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:11:06.263 [2024-07-12 11:15:32.070365] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.263 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.263 [2024-07-12 11:15:32.134561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:06.263 [2024-07-12 11:15:32.241614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.263 [2024-07-12 11:15:32.241704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.263 [2024-07-12 11:15:32.241718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.263 [2024-07-12 11:15:32.241729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.263 [2024-07-12 11:15:32.241738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:06.263 [2024-07-12 11:15:32.241861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.263 [2024-07-12 11:15:32.241924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.263 [2024-07-12 11:15:32.241928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.263 11:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:06.521 [2024-07-12 11:15:32.590885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.521 11:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.779 11:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:06.779 11:15:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.038 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:07.038 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:07.296 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:07.553 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=02497a5b-e0ad-417e-b4ce-73f4028a60d8 00:11:07.553 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 02497a5b-e0ad-417e-b4ce-73f4028a60d8 lvol 20 00:11:07.811 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c49c61d0-a7e7-4614-91b7-fd250c9093ea 00:11:07.811 11:15:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.068 11:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c49c61d0-a7e7-4614-91b7-fd250c9093ea 00:11:08.325 11:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:08.582 [2024-07-12 11:15:34.658524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.582 11:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:08.840 11:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=532733 00:11:08.840 11:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:08.840 11:15:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:08.840 EAL: No free 2048 kB hugepages reported on node 1 
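
The namespace wiring that nvmf/common.sh performed above (nvmf_tcp_init, lines @229-@268 of the trace) can be condensed into the sketch below. Interface names, IPs, and the namespace name are the ones visible in this log; the real commands need root and the actual NIC ports, so this version only prints each step instead of applying it.

```shell
#!/usr/bin/env bash
# Dry-run recap of the target/initiator split done by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk        # target-side network namespace
TARGET_IF=cvl_0_0         # NIC port moved into the namespace (SPDK target)
INIT_IF=cvl_0_1           # NIC port left in the root namespace (initiator)

run() { echo "+ $*"; }    # replace the echo with "$@" to actually apply

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2    # initiator -> target sanity check, as in the log
```

With both ends up, nvmf_tgt is then launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.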
00:11:10.214 11:15:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c49c61d0-a7e7-4614-91b7-fd250c9093ea MY_SNAPSHOT 00:11:10.214 11:15:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0e3b1b20-d132-46d2-b749-262b47b743f9 00:11:10.214 11:15:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c49c61d0-a7e7-4614-91b7-fd250c9093ea 30 00:11:10.472 11:15:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0e3b1b20-d132-46d2-b749-262b47b743f9 MY_CLONE 00:11:10.730 11:15:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6f762377-0b07-4c13-8679-cf7846c44c68 00:11:10.730 11:15:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6f762377-0b07-4c13-8679-cf7846c44c68 00:11:11.663 11:15:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 532733 00:11:19.770 Initializing NVMe Controllers 00:11:19.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:19.770 Controller IO queue size 128, less than required. 00:11:19.770 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:19.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:19.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:19.770 Initialization complete. Launching workers. 
00:11:19.770 ======================================================== 00:11:19.770 Latency(us) 00:11:19.770 Device Information : IOPS MiB/s Average min max 00:11:19.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10692.50 41.77 11971.75 2048.66 61411.36 00:11:19.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10797.60 42.18 11859.66 1286.27 72448.26 00:11:19.770 ======================================================== 00:11:19.770 Total : 21490.10 83.95 11915.43 1286.27 72448.26 00:11:19.770 00:11:19.770 11:15:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:19.770 11:15:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c49c61d0-a7e7-4614-91b7-fd250c9093ea 00:11:20.028 11:15:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 02497a5b-e0ad-417e-b4ce-73f4028a60d8 00:11:20.285 11:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:20.285 11:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:20.285 11:15:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:20.285 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.285 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.286 rmmod nvme_tcp 00:11:20.286 rmmod nvme_fabrics 00:11:20.286 rmmod nvme_keyring 00:11:20.286 
11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 532422 ']' 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 532422 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 532422 ']' 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 532422 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 532422 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 532422' 00:11:20.286 killing process with pid 532422 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 532422 00:11:20.286 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 532422 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.545 11:15:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.085 11:15:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.085 00:11:23.085 real 0m18.910s 00:11:23.085 user 1m4.604s 00:11:23.085 sys 0m5.390s 00:11:23.085 11:15:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.085 11:15:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:23.085 ************************************ 00:11:23.085 END TEST nvmf_lvol 00:11:23.085 ************************************ 00:11:23.085 11:15:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:23.085 11:15:48 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:23.085 11:15:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.085 11:15:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.085 11:15:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.085 ************************************ 00:11:23.085 START TEST nvmf_lvs_grow 00:11:23.085 ************************************ 00:11:23.085 11:15:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:23.085 * Looking for test storage... 
00:11:23.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.086 11:15:48 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.086 11:15:48 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.086 11:15:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.990 11:15:50 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.990 11:15:50 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:24.990 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:24.990 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:24.990 Found net devices under 0000:0a:00.0: cvl_0_0
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:24.990 Found net devices under 0000:0a:00.1: cvl_0_1
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:24.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:24.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms
00:11:24.990
00:11:24.990 --- 10.0.0.2 ping statistics ---
00:11:24.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.990 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms
00:11:24.990 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:24.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:24.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:11:24.991
00:11:24.991 --- 10.0.0.1 ping statistics ---
00:11:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.991 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:24.991 11:15:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=536019
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 536019
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 536019 ']'
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:24.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:24.991 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:24.991 [2024-07-12 11:15:51.072152] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:11:24.991 [2024-07-12 11:15:51.072266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:24.991 EAL: No free 2048 kB hugepages reported on node 1
00:11:25.249 [2024-07-12 11:15:51.138576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.249 [2024-07-12 11:15:51.248643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:25.249 [2024-07-12 11:15:51.248706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:25.249 [2024-07-12 11:15:51.248734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:25.249 [2024-07-12 11:15:51.248746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:25.249 [2024-07-12 11:15:51.248756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:25.249 [2024-07-12 11:15:51.248788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.249 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:25.249 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0
00:11:25.249 11:15:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:25.249 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:25.249 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:25.506 11:15:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:25.506 11:15:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:25.763 [2024-07-12 11:15:51.660131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:25.763 ************************************
00:11:25.763 START TEST lvs_grow_clean
00:11:25.763 ************************************
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:25.763 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:26.021 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:11:26.021 11:15:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:11:26.279 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:26.279 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:26.279 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:11:26.536 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:11:26.536 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:11:26.536 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c98d1092-30dc-4dca-adf8-2269fcef26d3 lvol 150
00:11:26.793 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e
00:11:26.793 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:26.793 11:15:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:11:27.051 [2024-07-12 11:15:53.001065] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:11:27.051 [2024-07-12 11:15:53.001159] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:11:27.051 true
00:11:27.051 11:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:27.051 11:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:11:27.310 11:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:11:27.310 11:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:27.568 11:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e
00:11:27.826 11:15:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:11:28.084 [2024-07-12 11:15:54.036241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:28.084 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=536436
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 536436 /var/tmp/bdevperf.sock
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 536436 ']'
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:11:28.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:28.342 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:11:28.342 [2024-07-12 11:15:54.329074] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:11:28.342 [2024-07-12 11:15:54.329159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid536436 ]
00:11:28.342 EAL: No free 2048 kB hugepages reported on node 1
00:11:28.342 [2024-07-12 11:15:54.386426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:28.600 [2024-07-12 11:15:54.495412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:28.600 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:28.600 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0
00:11:28.600 11:15:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:11:29.164 Nvme0n1
00:11:29.164 11:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:11:29.422 [
00:11:29.422 {
00:11:29.422 "name": "Nvme0n1",
00:11:29.422 "aliases": [
00:11:29.422 "c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e"
00:11:29.422 ],
00:11:29.422 "product_name": "NVMe disk",
00:11:29.422 "block_size": 4096,
00:11:29.422 "num_blocks": 38912,
00:11:29.422 "uuid": "c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e",
00:11:29.422 "assigned_rate_limits": {
00:11:29.422 "rw_ios_per_sec": 0,
00:11:29.422 "rw_mbytes_per_sec": 0,
00:11:29.422 "r_mbytes_per_sec": 0,
00:11:29.422 "w_mbytes_per_sec": 0
00:11:29.422 },
00:11:29.422 "claimed": false,
00:11:29.422 "zoned": false,
00:11:29.423 "supported_io_types": {
00:11:29.423 "read": true,
00:11:29.423 "write": true,
00:11:29.423 "unmap": true,
00:11:29.423 "flush": true,
00:11:29.423 "reset": true,
00:11:29.423 "nvme_admin": true,
00:11:29.423 "nvme_io": true,
00:11:29.423 "nvme_io_md": false,
00:11:29.423 "write_zeroes": true,
00:11:29.423 "zcopy": false,
00:11:29.423 "get_zone_info": false,
00:11:29.423 "zone_management": false,
00:11:29.423 "zone_append": false,
00:11:29.423 "compare": true,
00:11:29.423 "compare_and_write": true,
00:11:29.423 "abort": true,
00:11:29.423 "seek_hole": false,
00:11:29.423 "seek_data": false,
00:11:29.423 "copy": true,
00:11:29.423 "nvme_iov_md": false
00:11:29.423 },
00:11:29.423 "memory_domains": [
00:11:29.423 {
00:11:29.423 "dma_device_id": "system",
00:11:29.423 "dma_device_type": 1
00:11:29.423 }
00:11:29.423 ],
00:11:29.423 "driver_specific": {
00:11:29.423 "nvme": [
00:11:29.423 {
00:11:29.423 "trid": {
00:11:29.423 "trtype": "TCP",
00:11:29.423 "adrfam": "IPv4",
00:11:29.423 "traddr": "10.0.0.2",
00:11:29.423 "trsvcid": "4420",
00:11:29.423 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:11:29.423 },
00:11:29.423 "ctrlr_data": {
00:11:29.423 "cntlid": 1,
00:11:29.423 "vendor_id": "0x8086",
00:11:29.423 "model_number": "SPDK bdev Controller",
00:11:29.423 "serial_number": "SPDK0",
00:11:29.423 "firmware_revision": "24.09",
00:11:29.423 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:29.423 "oacs": {
00:11:29.423 "security": 0,
00:11:29.423 "format": 0,
00:11:29.423 "firmware": 0,
00:11:29.423 "ns_manage": 0
00:11:29.423 },
00:11:29.423 "multi_ctrlr": true,
00:11:29.423 "ana_reporting": false
00:11:29.423 },
00:11:29.423 "vs": {
00:11:29.423 "nvme_version": "1.3"
00:11:29.423 },
00:11:29.423 "ns_data": {
00:11:29.423 "id": 1,
00:11:29.423 "can_share": true
00:11:29.423 }
00:11:29.423 }
00:11:29.423 ],
00:11:29.423 "mp_policy": "active_passive"
00:11:29.423 }
00:11:29.423 }
00:11:29.423 ]
00:11:29.423 11:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=536572
00:11:29.423 11:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:11:29.423 11:15:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:11:29.423 Running I/O for 10 seconds...
00:11:30.355 Latency(us)
00:11:30.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:30.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:30.355 Nvme0n1 : 1.00 15529.00 60.66 0.00 0.00 0.00 0.00 0.00
00:11:30.355 ===================================================================================================================
00:11:30.355 Total : 15529.00 60.66 0.00 0.00 0.00 0.00 0.00
00:11:30.355
00:11:31.367 11:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:31.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:31.367 Nvme0n1 : 2.00 15765.50 61.58 0.00 0.00 0.00 0.00 0.00
00:11:31.367 ===================================================================================================================
00:11:31.367 Total : 15765.50 61.58 0.00 0.00 0.00 0.00 0.00
00:11:31.367
00:11:31.625 true
00:11:31.625 11:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:31.625 11:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:11:31.882 11:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:11:31.882 11:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:11:31.882 11:15:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 536572
00:11:32.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:32.447 Nvme0n1 : 3.00 15898.00 62.10 0.00 0.00 0.00 0.00 0.00
00:11:32.447 ===================================================================================================================
00:11:32.447 Total : 15898.00 62.10 0.00 0.00 0.00 0.00 0.00
00:11:32.447
00:11:33.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:33.381 Nvme0n1 : 4.00 16019.25 62.58 0.00 0.00 0.00 0.00 0.00
00:11:33.381 ===================================================================================================================
00:11:33.381 Total : 16019.25 62.58 0.00 0.00 0.00 0.00 0.00
00:11:33.381
00:11:34.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:34.313 Nvme0n1 : 5.00 16066.60 62.76 0.00 0.00 0.00 0.00 0.00
00:11:34.313 ===================================================================================================================
00:11:34.313 Total : 16066.60 62.76 0.00 0.00 0.00 0.00 0.00
00:11:34.313
00:11:35.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:35.687 Nvme0n1 : 6.00 16103.83 62.91 0.00 0.00 0.00 0.00 0.00
00:11:35.688 ===================================================================================================================
00:11:35.688 Total : 16103.83 62.91 0.00 0.00 0.00 0.00 0.00
00:11:35.688
00:11:36.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:36.621 Nvme0n1 : 7.00 16126.00 62.99 0.00 0.00 0.00 0.00 0.00
00:11:36.621 ===================================================================================================================
00:11:36.621 Total : 16126.00 62.99 0.00 0.00 0.00 0.00 0.00
00:11:36.621
00:11:37.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:37.554 Nvme0n1 : 8.00 16159.62 63.12 0.00 0.00 0.00 0.00 0.00
00:11:37.554 ===================================================================================================================
00:11:37.554 Total : 16159.62 63.12 0.00 0.00 0.00 0.00 0.00
00:11:37.554
00:11:38.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:38.487 Nvme0n1 : 9.00 16198.56 63.28 0.00 0.00 0.00 0.00 0.00
00:11:38.487 ===================================================================================================================
00:11:38.487 Total : 16198.56 63.28 0.00 0.00 0.00 0.00 0.00
00:11:38.487
00:11:39.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:39.420 Nvme0n1 : 10.00 16217.00 63.35 0.00 0.00 0.00 0.00 0.00
00:11:39.420 ===================================================================================================================
00:11:39.420 Total : 16217.00 63.35 0.00 0.00 0.00 0.00 0.00
00:11:39.420
00:11:39.420
00:11:39.420 Latency(us)
00:11:39.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:39.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:11:39.420 Nvme0n1 : 10.00 16222.92 63.37 0.00 0.00 7885.67 3228.25 15631.55
00:11:39.420 ===================================================================================================================
00:11:39.420 Total : 16222.92 63.37 0.00 0.00 7885.67 3228.25 15631.55
00:11:39.420 0
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 536436
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 536436 ']'
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 536436
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 536436
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 536436'
00:11:39.420 killing process with pid 536436
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 536436
00:11:39.420 Received shutdown signal, test time was about 10.000000 seconds
00:11:39.420
00:11:39.420 Latency(us)
00:11:39.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:39.420 ===================================================================================================================
00:11:39.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:11:39.420 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 536436
00:11:39.678 11:16:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:39.935 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:40.193 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:40.193 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:11:40.451 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:11:40.451 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:11:40.451 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:11:40.709 [2024-07-12 11:16:06.777553] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:11:40.709 11:16:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3
00:11:40.966 request:
00:11:40.966 {
00:11:40.966 "uuid": "c98d1092-30dc-4dca-adf8-2269fcef26d3",
00:11:40.966 "method": "bdev_lvol_get_lvstores",
00:11:40.966 "req_id": 1
00:11:40.966 }
00:11:40.966 Got JSON-RPC error response
00:11:40.966 response:
00:11:40.966 {
00:11:40.966 "code": -19,
00:11:40.966 "message": "No such device"
00:11:40.966 }
00:11:40.966 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1
00:11:40.966 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:40.966 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:40.967 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:40.967 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:41.223 aio_bdev
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:11:41.223 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:11:41.481 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e -t 2000
00:11:41.738 [
00:11:41.738 {
00:11:41.738 "name": "c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e",
00:11:41.738 "aliases": [
00:11:41.738 "lvs/lvol"
00:11:41.738 ],
00:11:41.738 "product_name": "Logical Volume",
00:11:41.738 "block_size": 4096,
00:11:41.738 "num_blocks": 38912,
00:11:41.738 "uuid": "c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e",
00:11:41.738 "assigned_rate_limits": {
"rw_ios_per_sec": 0, 00:11:41.738 "rw_mbytes_per_sec": 0, 00:11:41.738 "r_mbytes_per_sec": 0, 00:11:41.738 "w_mbytes_per_sec": 0 00:11:41.738 }, 00:11:41.738 "claimed": false, 00:11:41.738 "zoned": false, 00:11:41.738 "supported_io_types": { 00:11:41.738 "read": true, 00:11:41.738 "write": true, 00:11:41.738 "unmap": true, 00:11:41.738 "flush": false, 00:11:41.738 "reset": true, 00:11:41.738 "nvme_admin": false, 00:11:41.738 "nvme_io": false, 00:11:41.738 "nvme_io_md": false, 00:11:41.738 "write_zeroes": true, 00:11:41.738 "zcopy": false, 00:11:41.738 "get_zone_info": false, 00:11:41.738 "zone_management": false, 00:11:41.738 "zone_append": false, 00:11:41.738 "compare": false, 00:11:41.738 "compare_and_write": false, 00:11:41.738 "abort": false, 00:11:41.738 "seek_hole": true, 00:11:41.738 "seek_data": true, 00:11:41.738 "copy": false, 00:11:41.738 "nvme_iov_md": false 00:11:41.738 }, 00:11:41.738 "driver_specific": { 00:11:41.738 "lvol": { 00:11:41.738 "lvol_store_uuid": "c98d1092-30dc-4dca-adf8-2269fcef26d3", 00:11:41.738 "base_bdev": "aio_bdev", 00:11:41.738 "thin_provision": false, 00:11:41.738 "num_allocated_clusters": 38, 00:11:41.738 "snapshot": false, 00:11:41.738 "clone": false, 00:11:41.738 "esnap_clone": false 00:11:41.738 } 00:11:41.738 } 00:11:41.738 } 00:11:41.738 ] 00:11:41.738 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:41.738 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3 00:11:41.738 11:16:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:41.995 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:41.995 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c98d1092-30dc-4dca-adf8-2269fcef26d3 00:11:41.995 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:42.253 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:42.253 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c8c9c6a1-0fcb-48b3-8ed0-fe23300cd55e 00:11:42.510 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c98d1092-30dc-4dca-adf8-2269fcef26d3 00:11:42.767 11:16:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.025 00:11:43.025 real 0m17.357s 00:11:43.025 user 0m16.762s 00:11:43.025 sys 0m1.892s 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:43.025 ************************************ 00:11:43.025 END TEST lvs_grow_clean 00:11:43.025 ************************************ 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:43.025 ************************************ 00:11:43.025 START TEST lvs_grow_dirty 00:11:43.025 ************************************ 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:43.025 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.283 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:43.283 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:43.541 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7921e844-3cff-4932-855c-b3f0bff86946 00:11:43.541 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:43.541 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:43.799 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:43.799 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:43.799 11:16:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7921e844-3cff-4932-855c-b3f0bff86946 lvol 150 00:11:44.056 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:11:44.056 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:44.057 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:44.315 [2024-07-12 11:16:10.341025] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:44.315 [2024-07-12 11:16:10.341105] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:44.315 
true 00:11:44.315 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:44.315 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:44.572 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:44.572 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:44.831 11:16:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:11:45.089 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:45.347 [2024-07-12 11:16:11.307961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.347 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=538602 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 538602 /var/tmp/bdevperf.sock 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 538602 ']' 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:45.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.605 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:45.605 [2024-07-12 11:16:11.605964] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:11:45.605 [2024-07-12 11:16:11.606040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid538602 ] 00:11:45.605 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.605 [2024-07-12 11:16:11.663095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.876 [2024-07-12 11:16:11.768912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.876 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.876 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:45.876 11:16:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:46.136 Nvme0n1 00:11:46.136 11:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:46.393 [ 00:11:46.393 { 00:11:46.393 "name": "Nvme0n1", 00:11:46.393 "aliases": [ 00:11:46.393 "edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279" 00:11:46.393 ], 00:11:46.393 "product_name": "NVMe disk", 00:11:46.393 "block_size": 4096, 00:11:46.393 "num_blocks": 38912, 00:11:46.393 "uuid": "edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279", 00:11:46.393 "assigned_rate_limits": { 00:11:46.393 "rw_ios_per_sec": 0, 00:11:46.393 "rw_mbytes_per_sec": 0, 00:11:46.393 "r_mbytes_per_sec": 0, 00:11:46.393 "w_mbytes_per_sec": 0 00:11:46.393 }, 00:11:46.393 "claimed": false, 00:11:46.393 "zoned": false, 00:11:46.393 "supported_io_types": { 00:11:46.393 "read": true, 00:11:46.393 "write": true, 
00:11:46.393 "unmap": true, 00:11:46.393 "flush": true, 00:11:46.393 "reset": true, 00:11:46.393 "nvme_admin": true, 00:11:46.393 "nvme_io": true, 00:11:46.393 "nvme_io_md": false, 00:11:46.393 "write_zeroes": true, 00:11:46.393 "zcopy": false, 00:11:46.393 "get_zone_info": false, 00:11:46.393 "zone_management": false, 00:11:46.393 "zone_append": false, 00:11:46.393 "compare": true, 00:11:46.393 "compare_and_write": true, 00:11:46.393 "abort": true, 00:11:46.393 "seek_hole": false, 00:11:46.393 "seek_data": false, 00:11:46.393 "copy": true, 00:11:46.393 "nvme_iov_md": false 00:11:46.393 }, 00:11:46.393 "memory_domains": [ 00:11:46.393 { 00:11:46.393 "dma_device_id": "system", 00:11:46.393 "dma_device_type": 1 00:11:46.393 } 00:11:46.393 ], 00:11:46.393 "driver_specific": { 00:11:46.393 "nvme": [ 00:11:46.393 { 00:11:46.393 "trid": { 00:11:46.393 "trtype": "TCP", 00:11:46.393 "adrfam": "IPv4", 00:11:46.393 "traddr": "10.0.0.2", 00:11:46.393 "trsvcid": "4420", 00:11:46.393 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:46.393 }, 00:11:46.393 "ctrlr_data": { 00:11:46.393 "cntlid": 1, 00:11:46.393 "vendor_id": "0x8086", 00:11:46.393 "model_number": "SPDK bdev Controller", 00:11:46.393 "serial_number": "SPDK0", 00:11:46.393 "firmware_revision": "24.09", 00:11:46.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:46.393 "oacs": { 00:11:46.393 "security": 0, 00:11:46.393 "format": 0, 00:11:46.393 "firmware": 0, 00:11:46.393 "ns_manage": 0 00:11:46.393 }, 00:11:46.393 "multi_ctrlr": true, 00:11:46.393 "ana_reporting": false 00:11:46.393 }, 00:11:46.393 "vs": { 00:11:46.393 "nvme_version": "1.3" 00:11:46.393 }, 00:11:46.393 "ns_data": { 00:11:46.393 "id": 1, 00:11:46.393 "can_share": true 00:11:46.393 } 00:11:46.393 } 00:11:46.393 ], 00:11:46.393 "mp_policy": "active_passive" 00:11:46.393 } 00:11:46.393 } 00:11:46.393 ] 00:11:46.393 11:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=538631 00:11:46.393 11:16:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:46.394 11:16:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:46.651 Running I/O for 10 seconds... 00:11:47.597 Latency(us) 00:11:47.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.597 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:11:47.597 =================================================================================================================== 00:11:47.597 Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:11:47.597 00:11:48.530 11:16:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:48.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.530 Nvme0n1 : 2.00 15432.50 60.28 0.00 0.00 0.00 0.00 0.00 00:11:48.530 =================================================================================================================== 00:11:48.530 Total : 15432.50 60.28 0.00 0.00 0.00 0.00 0.00 00:11:48.530 00:11:48.788 true 00:11:48.789 11:16:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:48.789 11:16:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:49.047 11:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:49.047 11:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:11:49.047 11:16:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 538631 00:11:49.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.613 Nvme0n1 : 3.00 15518.00 60.62 0.00 0.00 0.00 0.00 0.00 00:11:49.613 =================================================================================================================== 00:11:49.613 Total : 15518.00 60.62 0.00 0.00 0.00 0.00 0.00 00:11:49.613 00:11:50.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.547 Nvme0n1 : 4.00 15639.00 61.09 0.00 0.00 0.00 0.00 0.00 00:11:50.547 =================================================================================================================== 00:11:50.547 Total : 15639.00 61.09 0.00 0.00 0.00 0.00 0.00 00:11:50.547 00:11:51.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.481 Nvme0n1 : 5.00 15711.60 61.37 0.00 0.00 0.00 0.00 0.00 00:11:51.481 =================================================================================================================== 00:11:51.481 Total : 15711.60 61.37 0.00 0.00 0.00 0.00 0.00 00:11:51.481 00:11:52.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.912 Nvme0n1 : 6.00 15749.67 61.52 0.00 0.00 0.00 0.00 0.00 00:11:52.912 =================================================================================================================== 00:11:52.912 Total : 15749.67 61.52 0.00 0.00 0.00 0.00 0.00 00:11:52.912 00:11:53.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.501 Nvme0n1 : 7.00 15758.71 61.56 0.00 0.00 0.00 0.00 0.00 00:11:53.501 =================================================================================================================== 00:11:53.501 Total : 15758.71 61.56 0.00 0.00 0.00 0.00 0.00 00:11:53.501 00:11:54.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:11:54.874 Nvme0n1 : 8.00 15777.50 61.63 0.00 0.00 0.00 0.00 0.00 00:11:54.874 =================================================================================================================== 00:11:54.874 Total : 15777.50 61.63 0.00 0.00 0.00 0.00 0.00 00:11:54.874 00:11:55.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.808 Nvme0n1 : 9.00 15830.67 61.84 0.00 0.00 0.00 0.00 0.00 00:11:55.808 =================================================================================================================== 00:11:55.808 Total : 15830.67 61.84 0.00 0.00 0.00 0.00 0.00 00:11:55.808 00:11:56.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.736 Nvme0n1 : 10.00 15860.50 61.96 0.00 0.00 0.00 0.00 0.00 00:11:56.736 =================================================================================================================== 00:11:56.736 Total : 15860.50 61.96 0.00 0.00 0.00 0.00 0.00 00:11:56.736 00:11:56.736 00:11:56.736 Latency(us) 00:11:56.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.736 Nvme0n1 : 10.01 15861.32 61.96 0.00 0.00 8065.33 4296.25 17379.18 00:11:56.736 =================================================================================================================== 00:11:56.736 Total : 15861.32 61.96 0.00 0.00 8065.33 4296.25 17379.18 00:11:56.736 0 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 538602 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 538602 ']' 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 538602 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:56.736 11:16:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 538602 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 538602' 00:11:56.736 killing process with pid 538602 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 538602 00:11:56.736 Received shutdown signal, test time was about 10.000000 seconds 00:11:56.736 00:11:56.736 Latency(us) 00:11:56.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.736 =================================================================================================================== 00:11:56.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:56.736 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 538602 00:11:56.992 11:16:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.297 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:57.553 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:57.553 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 536019 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 536019 00:11:57.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 536019 Killed "${NVMF_APP[@]}" "$@" 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=539953 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 539953 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 539953 ']' 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.811 
11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.811 11:16:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:57.811 [2024-07-12 11:16:23.788588] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:11:57.811 [2024-07-12 11:16:23.788659] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.811 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.811 [2024-07-12 11:16:23.853317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.068 [2024-07-12 11:16:23.961566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.068 [2024-07-12 11:16:23.961619] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.068 [2024-07-12 11:16:23.961633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.068 [2024-07-12 11:16:23.961644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.068 [2024-07-12 11:16:23.961653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:58.068 [2024-07-12 11:16:23.961678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.068 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.068 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:58.068 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.068 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.068 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:58.068 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.069 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:58.326 [2024-07-12 11:16:24.322286] blobstore.c:4867:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:58.326 [2024-07-12 11:16:24.322403] blobstore.c:4814:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:58.326 [2024-07-12 11:16:24.322448] blobstore.c:4814:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.326 11:16:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.326 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:58.583 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 -t 2000 00:11:58.841 [ 00:11:58.841 { 00:11:58.841 "name": "edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279", 00:11:58.841 "aliases": [ 00:11:58.841 "lvs/lvol" 00:11:58.841 ], 00:11:58.841 "product_name": "Logical Volume", 00:11:58.841 "block_size": 4096, 00:11:58.841 "num_blocks": 38912, 00:11:58.841 "uuid": "edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279", 00:11:58.841 "assigned_rate_limits": { 00:11:58.841 "rw_ios_per_sec": 0, 00:11:58.841 "rw_mbytes_per_sec": 0, 00:11:58.841 "r_mbytes_per_sec": 0, 00:11:58.841 "w_mbytes_per_sec": 0 00:11:58.841 }, 00:11:58.841 "claimed": false, 00:11:58.841 "zoned": false, 00:11:58.841 "supported_io_types": { 00:11:58.841 "read": true, 00:11:58.841 "write": true, 00:11:58.841 "unmap": true, 00:11:58.841 "flush": false, 00:11:58.841 "reset": true, 00:11:58.841 "nvme_admin": false, 00:11:58.841 "nvme_io": false, 00:11:58.841 "nvme_io_md": false, 00:11:58.841 "write_zeroes": true, 00:11:58.841 "zcopy": false, 00:11:58.841 "get_zone_info": false, 00:11:58.841 "zone_management": false, 00:11:58.841 "zone_append": false, 00:11:58.841 "compare": false, 00:11:58.841 "compare_and_write": false, 00:11:58.841 "abort": false, 00:11:58.841 "seek_hole": true, 00:11:58.841 "seek_data": true, 00:11:58.841 "copy": false, 00:11:58.841 "nvme_iov_md": false 
00:11:58.841 }, 00:11:58.841 "driver_specific": { 00:11:58.841 "lvol": { 00:11:58.841 "lvol_store_uuid": "7921e844-3cff-4932-855c-b3f0bff86946", 00:11:58.841 "base_bdev": "aio_bdev", 00:11:58.841 "thin_provision": false, 00:11:58.841 "num_allocated_clusters": 38, 00:11:58.841 "snapshot": false, 00:11:58.841 "clone": false, 00:11:58.841 "esnap_clone": false 00:11:58.841 } 00:11:58.841 } 00:11:58.841 } 00:11:58.841 ] 00:11:58.841 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:58.841 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:58.841 11:16:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:59.099 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:59.099 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:59.099 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:59.357 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:59.357 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:59.614 [2024-07-12 11:16:25.551698] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7921e844-3cff-4932-855c-b3f0bff86946 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:59.614 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:11:59.872 request: 00:11:59.872 { 00:11:59.872 "uuid": "7921e844-3cff-4932-855c-b3f0bff86946", 00:11:59.872 "method": "bdev_lvol_get_lvstores", 
00:11:59.872 "req_id": 1 00:11:59.872 } 00:11:59.872 Got JSON-RPC error response 00:11:59.872 response: 00:11:59.872 { 00:11:59.872 "code": -19, 00:11:59.872 "message": "No such device" 00:11:59.872 } 00:11:59.872 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:59.872 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:59.872 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:59.872 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:59.872 11:16:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:00.130 aio_bdev 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:00.130 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:00.387 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 -t 2000 00:12:00.669 [ 00:12:00.669 { 00:12:00.669 "name": "edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279", 00:12:00.669 "aliases": [ 00:12:00.669 "lvs/lvol" 00:12:00.669 ], 00:12:00.669 "product_name": "Logical Volume", 00:12:00.669 "block_size": 4096, 00:12:00.669 "num_blocks": 38912, 00:12:00.669 "uuid": "edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279", 00:12:00.669 "assigned_rate_limits": { 00:12:00.669 "rw_ios_per_sec": 0, 00:12:00.669 "rw_mbytes_per_sec": 0, 00:12:00.669 "r_mbytes_per_sec": 0, 00:12:00.669 "w_mbytes_per_sec": 0 00:12:00.669 }, 00:12:00.669 "claimed": false, 00:12:00.669 "zoned": false, 00:12:00.669 "supported_io_types": { 00:12:00.669 "read": true, 00:12:00.669 "write": true, 00:12:00.669 "unmap": true, 00:12:00.669 "flush": false, 00:12:00.669 "reset": true, 00:12:00.669 "nvme_admin": false, 00:12:00.669 "nvme_io": false, 00:12:00.669 "nvme_io_md": false, 00:12:00.669 "write_zeroes": true, 00:12:00.669 "zcopy": false, 00:12:00.669 "get_zone_info": false, 00:12:00.669 "zone_management": false, 00:12:00.669 "zone_append": false, 00:12:00.669 "compare": false, 00:12:00.669 "compare_and_write": false, 00:12:00.669 "abort": false, 00:12:00.669 "seek_hole": true, 00:12:00.669 "seek_data": true, 00:12:00.669 "copy": false, 00:12:00.669 "nvme_iov_md": false 00:12:00.669 }, 00:12:00.669 "driver_specific": { 00:12:00.669 "lvol": { 00:12:00.669 "lvol_store_uuid": "7921e844-3cff-4932-855c-b3f0bff86946", 00:12:00.669 "base_bdev": "aio_bdev", 00:12:00.669 "thin_provision": false, 00:12:00.669 "num_allocated_clusters": 38, 00:12:00.669 "snapshot": false, 00:12:00.669 "clone": false, 00:12:00.669 "esnap_clone": false 00:12:00.669 } 00:12:00.669 } 00:12:00.669 } 00:12:00.669 ] 00:12:00.669 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:00.669 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:12:00.669 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:00.669 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:00.669 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7921e844-3cff-4932-855c-b3f0bff86946 00:12:00.669 11:16:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:00.926 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:00.926 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete edc7bdbf-82c0-49e0-b03e-6f0ed5d9e279 00:12:01.184 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7921e844-3cff-4932-855c-b3f0bff86946 00:12:01.442 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:01.700 00:12:01.700 real 0m18.689s 00:12:01.700 user 0m47.725s 00:12:01.700 sys 0m4.583s 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:12:01.700 ************************************ 00:12:01.700 END TEST lvs_grow_dirty 00:12:01.700 ************************************ 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:01.700 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:01.700 nvmf_trace.0 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.958 rmmod 
nvme_tcp 00:12:01.958 rmmod nvme_fabrics 00:12:01.958 rmmod nvme_keyring 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 539953 ']' 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 539953 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 539953 ']' 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 539953 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 539953 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 539953' 00:12:01.958 killing process with pid 539953 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 539953 00:12:01.958 11:16:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 539953 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.216 
11:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.216 11:16:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.119 11:16:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.119 00:12:04.119 real 0m41.528s 00:12:04.119 user 1m10.060s 00:12:04.119 sys 0m8.423s 00:12:04.119 11:16:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:04.119 11:16:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:04.119 ************************************ 00:12:04.119 END TEST nvmf_lvs_grow 00:12:04.119 ************************************ 00:12:04.378 11:16:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:04.378 11:16:30 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:04.378 11:16:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:04.378 11:16:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.378 11:16:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.378 ************************************ 00:12:04.378 START TEST nvmf_bdev_io_wait 00:12:04.378 ************************************ 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:04.378 * Looking for test storage... 
00:12:04.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.378 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.379 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.379 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.379 11:16:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.912 11:16:32 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:06.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:06.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:06.912 11:16:32 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:06.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:06.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:06.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:12:06.912 00:12:06.912 --- 10.0.0.2 ping statistics --- 00:12:06.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.912 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:06.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:12:06.912 00:12:06.912 --- 10.0.0.1 ping statistics --- 00:12:06.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.912 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:06.912 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=542493 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 542493 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 542493 ']' 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 [2024-07-12 11:16:32.656614] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:12:06.913 [2024-07-12 11:16:32.656697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.913 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.913 [2024-07-12 11:16:32.720190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.913 [2024-07-12 11:16:32.822871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:06.913 [2024-07-12 11:16:32.822923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.913 [2024-07-12 11:16:32.822951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.913 [2024-07-12 11:16:32.822962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.913 [2024-07-12 11:16:32.822971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.913 [2024-07-12 11:16:32.823027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.913 [2024-07-12 11:16:32.823085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.913 [2024-07-12 11:16:32.823152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.913 [2024-07-12 11:16:32.823155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 [2024-07-12 11:16:32.953204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 Malloc0 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:06.913 [2024-07-12 11:16:33.017400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=542632 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=542634 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:06.913 { 00:12:06.913 "params": { 00:12:06.913 "name": "Nvme$subsystem", 00:12:06.913 "trtype": "$TEST_TRANSPORT", 
00:12:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.913 "adrfam": "ipv4", 00:12:06.913 "trsvcid": "$NVMF_PORT", 00:12:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.913 "hdgst": ${hdgst:-false}, 00:12:06.913 "ddgst": ${ddgst:-false} 00:12:06.913 }, 00:12:06.913 "method": "bdev_nvme_attach_controller" 00:12:06.913 } 00:12:06.913 EOF 00:12:06.913 )") 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=542636 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:06.913 { 00:12:06.913 "params": { 00:12:06.913 "name": "Nvme$subsystem", 00:12:06.913 "trtype": "$TEST_TRANSPORT", 00:12:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.913 "adrfam": "ipv4", 00:12:06.913 "trsvcid": "$NVMF_PORT", 00:12:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.913 "hdgst": ${hdgst:-false}, 00:12:06.913 "ddgst": ${ddgst:-false} 00:12:06.913 }, 00:12:06.913 "method": "bdev_nvme_attach_controller" 00:12:06.913 } 00:12:06.913 EOF 00:12:06.913 )") 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=542639 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:06.913 { 00:12:06.913 "params": { 00:12:06.913 "name": "Nvme$subsystem", 00:12:06.913 "trtype": "$TEST_TRANSPORT", 00:12:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.913 "adrfam": "ipv4", 00:12:06.913 "trsvcid": "$NVMF_PORT", 00:12:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.913 "hdgst": ${hdgst:-false}, 00:12:06.913 "ddgst": ${ddgst:-false} 00:12:06.913 }, 00:12:06.913 "method": "bdev_nvme_attach_controller" 00:12:06.913 } 00:12:06.913 EOF 00:12:06.913 )") 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:06.913 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:06.913 { 00:12:06.913 "params": { 00:12:06.914 "name": "Nvme$subsystem", 00:12:06.914 "trtype": "$TEST_TRANSPORT", 00:12:06.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.914 "adrfam": "ipv4", 00:12:06.914 "trsvcid": "$NVMF_PORT", 00:12:06.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.914 "hdgst": ${hdgst:-false}, 00:12:06.914 "ddgst": ${ddgst:-false} 00:12:06.914 }, 00:12:06.914 "method": "bdev_nvme_attach_controller" 00:12:06.914 } 00:12:06.914 EOF 00:12:06.914 )") 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 542632 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:06.914 "params": { 00:12:06.914 "name": "Nvme1", 00:12:06.914 "trtype": "tcp", 00:12:06.914 "traddr": "10.0.0.2", 00:12:06.914 "adrfam": "ipv4", 00:12:06.914 "trsvcid": "4420", 00:12:06.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.914 "hdgst": false, 00:12:06.914 "ddgst": false 00:12:06.914 }, 00:12:06.914 "method": "bdev_nvme_attach_controller" 00:12:06.914 }' 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:06.914 "params": { 00:12:06.914 "name": "Nvme1", 00:12:06.914 "trtype": "tcp", 00:12:06.914 "traddr": "10.0.0.2", 00:12:06.914 "adrfam": "ipv4", 00:12:06.914 "trsvcid": "4420", 00:12:06.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.914 "hdgst": false, 00:12:06.914 "ddgst": false 00:12:06.914 }, 00:12:06.914 "method": "bdev_nvme_attach_controller" 00:12:06.914 }' 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:06.914 "params": { 00:12:06.914 "name": "Nvme1", 00:12:06.914 "trtype": "tcp", 00:12:06.914 "traddr": "10.0.0.2", 00:12:06.914 "adrfam": "ipv4", 00:12:06.914 "trsvcid": "4420", 00:12:06.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.914 "hdgst": false, 00:12:06.914 "ddgst": false 00:12:06.914 }, 00:12:06.914 "method": "bdev_nvme_attach_controller" 00:12:06.914 }' 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:06.914 11:16:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:06.914 "params": { 00:12:06.914 "name": "Nvme1", 00:12:06.914 "trtype": "tcp", 00:12:06.914 "traddr": "10.0.0.2", 00:12:06.914 "adrfam": "ipv4", 00:12:06.914 "trsvcid": "4420", 00:12:06.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.914 "hdgst": false, 00:12:06.914 "ddgst": false 00:12:06.914 }, 00:12:06.914 "method": "bdev_nvme_attach_controller" 00:12:06.914 }' 00:12:07.172 [2024-07-12 11:16:33.064530] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:12:07.172 [2024-07-12 11:16:33.064536] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:12:07.172 [2024-07-12 11:16:33.064530] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:12:07.172 [2024-07-12 11:16:33.064530] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:12:07.172 [2024-07-12 11:16:33.064615] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 11:16:33.064615] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 11:16:33.064616] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 11:16:33.064616] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:07.172 --proc-type=auto ] 00:12:07.172 --proc-type=auto ] 00:12:07.172 --proc-type=auto ] 00:12:07.172 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.172 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.172 [2024-07-12 11:16:33.233814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.430 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.430 [2024-07-12 11:16:33.331795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:07.430 [2024-07-12 11:16:33.333761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.430 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.430 [2024-07-12 11:16:33.430519] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:07.430 [2024-07-12 11:16:33.432568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.430 [2024-07-12 11:16:33.528575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:07.430 [2024-07-12 11:16:33.532989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.688 [2024-07-12 11:16:33.625614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:07.688 Running I/O for 1 seconds... 00:12:07.688 Running I/O for 1 seconds... 00:12:07.946 Running I/O for 1 seconds... 00:12:07.946 Running I/O for 1 seconds... 00:12:08.880 00:12:08.880 Latency(us) 00:12:08.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.880 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:08.880 Nvme1n1 : 1.02 7298.18 28.51 0.00 0.00 17295.68 8107.05 33204.91 00:12:08.880 =================================================================================================================== 00:12:08.880 Total : 7298.18 28.51 0.00 0.00 17295.68 8107.05 33204.91 00:12:08.880 00:12:08.880 Latency(us) 00:12:08.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.880 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:08.880 Nvme1n1 : 1.00 192420.16 751.64 0.00 0.00 662.58 267.00 904.15 00:12:08.881 =================================================================================================================== 00:12:08.881 Total : 192420.16 751.64 0.00 0.00 662.58 267.00 904.15 00:12:08.881 00:12:08.881 Latency(us) 00:12:08.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.881 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:08.881 Nvme1n1 : 1.01 6753.33 26.38 0.00 0.00 18882.27 6747.78 37088.52 00:12:08.881 
=================================================================================================================== 00:12:08.881 Total : 6753.33 26.38 0.00 0.00 18882.27 6747.78 37088.52 00:12:08.881 00:12:08.881 Latency(us) 00:12:08.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:08.881 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:08.881 Nvme1n1 : 1.01 9752.29 38.09 0.00 0.00 13067.30 8009.96 25243.50 00:12:08.881 =================================================================================================================== 00:12:08.881 Total : 9752.29 38.09 0.00 0.00 13067.30 8009.96 25243.50 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 542634 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 542636 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 542639 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.138 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.138 rmmod nvme_tcp 00:12:09.138 rmmod nvme_fabrics 00:12:09.138 rmmod nvme_keyring 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 542493 ']' 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 542493 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 542493 ']' 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 542493 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 542493 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 542493' 00:12:09.396 killing process with pid 542493 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 542493 00:12:09.396 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 542493 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.656 11:16:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.561 11:16:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.561 00:12:11.561 real 0m7.328s 00:12:11.561 user 0m16.935s 00:12:11.561 sys 0m3.564s 00:12:11.561 11:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.561 11:16:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:11.561 ************************************ 00:12:11.561 END TEST nvmf_bdev_io_wait 00:12:11.561 ************************************ 00:12:11.561 11:16:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:11.561 11:16:37 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:11.561 11:16:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.561 11:16:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.561 11:16:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.561 ************************************ 00:12:11.561 START TEST nvmf_queue_depth 00:12:11.561 ************************************ 00:12:11.561 11:16:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh 
--transport=tcp 00:12:11.821 * Looking for test storage... 00:12:11.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.821 11:16:37 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:11.821 11:16:37 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:11.822 11:16:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:13.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:13.725 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:13.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:13.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:13.725 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:13.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:12:13.985 00:12:13.985 --- 10.0.0.2 ping statistics --- 00:12:13.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.985 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:12:13.985 00:12:13.985 --- 10.0.0.1 ping statistics --- 00:12:13.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.985 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:13.985 11:16:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=544861 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 544861 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 544861 ']' 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.985 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:13.985 [2024-07-12 11:16:40.063298] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:12:13.985 [2024-07-12 11:16:40.063389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.985 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.244 [2024-07-12 11:16:40.131262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.244 [2024-07-12 11:16:40.243689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:14.244 [2024-07-12 11:16:40.243750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.244 [2024-07-12 11:16:40.243779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.244 [2024-07-12 11:16:40.243790] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.244 [2024-07-12 11:16:40.243801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.244 [2024-07-12 11:16:40.243833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.244 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.244 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:14.244 11:16:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.244 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.244 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 [2024-07-12 11:16:40.386513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:14.504 11:16:40 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 Malloc0 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 [2024-07-12 11:16:40.445619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=544888 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 544888 /var/tmp/bdevperf.sock 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 544888 ']' 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:14.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.504 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:14.504 [2024-07-12 11:16:40.496447] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:12:14.504 [2024-07-12 11:16:40.496526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544888 ] 00:12:14.504 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.504 [2024-07-12 11:16:40.554752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.762 [2024-07-12 11:16:40.663725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.762 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.762 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:14.762 11:16:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:14.762 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.762 11:16:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:15.020 NVMe0n1 00:12:15.020 11:16:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.020 11:16:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:15.020 Running I/O for 10 seconds... 
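The setup traced above (target-side RPCs, then bdevperf attaching over TCP at queue depth 1024) can be condensed into the following sketch. All subcommands and flags are taken verbatim from the trace; the `run` wrapper only echoes the commands, so this is an illustrative reconstruction safe to inspect without root or an SPDK build — it is not the actual `queue_depth.sh`, and the `RPC` path is an assumption.

```shell
# Echo-only runner: prints each command instead of executing it.
run() { echo "+ $*"; }

RPC=scripts/rpc.py                    # assumed path to SPDK's rpc.py
NQN=nqn.2016-06.io.spdk:cnode1        # subsystem NQN from the trace
SOCK=/var/tmp/bdevperf.sock           # bdevperf RPC socket from the trace

# 1. Target side: transport, malloc bdev, subsystem, namespace, listener.
run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" bdev_malloc_create 64 512 -b Malloc0
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 2. Initiator side: bdevperf waits (-z), the controller is attached over
#    its own RPC socket, then the 10-second verify run at qd 1024 starts.
run bdevperf -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10
run "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
run bdevperf.py -s "$SOCK" perform_tests
```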
00:12:27.310 00:12:27.310 Latency(us) 00:12:27.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.310 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:27.310 Verification LBA range: start 0x0 length 0x4000 00:12:27.310 NVMe0n1 : 10.14 8654.47 33.81 0.00 0.00 117307.89 21068.61 83886.08 00:12:27.310 =================================================================================================================== 00:12:27.310 Total : 8654.47 33.81 0.00 0.00 117307.89 21068.61 83886.08 00:12:27.310 0 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 544888 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 544888 ']' 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 544888 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544888 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544888' 00:12:27.310 killing process with pid 544888 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 544888 00:12:27.310 Received shutdown signal, test time was about 10.000000 seconds 00:12:27.310 00:12:27.310 Latency(us) 00:12:27.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.310 
=================================================================================================================== 00:12:27.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 544888 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.310 rmmod nvme_tcp 00:12:27.310 rmmod nvme_fabrics 00:12:27.310 rmmod nvme_keyring 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 544861 ']' 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 544861 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 544861 ']' 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 544861 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:27.310 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.310 11:16:51 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 544861 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 544861' 00:12:27.311 killing process with pid 544861 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 544861 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 544861 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.311 11:16:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.879 11:16:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.879 00:12:27.879 real 0m16.344s 00:12:27.879 user 0m21.855s 00:12:27.879 sys 0m3.609s 00:12:27.879 11:16:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.879 11:16:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:27.879 ************************************ 00:12:27.879 END TEST nvmf_queue_depth 00:12:27.879 
************************************ 00:12:28.138 11:16:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:28.138 11:16:54 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:28.138 11:16:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:28.138 11:16:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.138 11:16:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.138 ************************************ 00:12:28.138 START TEST nvmf_target_multipath 00:12:28.138 ************************************ 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:28.138 * Looking for test storage... 00:12:28.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.138 
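Each test in this log opens with `nvmftestinit`, whose network plumbing the earlier queue_depth trace shows in full: one NIC port (`cvl_0_0`) is moved into a namespace to act as the target at 10.0.0.2, while its sibling (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1. The sketch below replays those commands from the trace with an echo-only `run` wrapper — a reconstruction for reading, not the real `nvmf_tcp_init`, since the actual setup needs root and physical NICs.

```shell
# Echo-only runner: prints each command instead of executing it.
run() { echo "+ $*"; }

TGT_IF=cvl_0_0                # target-side port (moved into the netns)
INI_IF=cvl_0_1                # initiator-side port (stays in root ns)
NS=cvl_0_0_ns_spdk            # namespace name from the trace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2       # reachability check, as in the trace
```

Putting the target behind a namespace boundary forces NVMe/TCP traffic onto a real network path between the two ports rather than loopback, which is why the trace pings in both directions before starting `nvmf_tgt` under `ip netns exec`.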
11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.138 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.139 11:16:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.677 
11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.677 11:16:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:30.677 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:30.677 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.677 
11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:30.677 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:30.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.677 11:16:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:12:30.677 00:12:30.677 --- 10.0.0.2 ping statistics --- 00:12:30.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.677 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:12:30.677 00:12:30.677 --- 10.0.0.1 ping statistics --- 00:12:30.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.677 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:30.677 11:16:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:30.678 only one NIC for nvmf test 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.678 rmmod nvme_tcp 00:12:30.678 rmmod nvme_fabrics 00:12:30.678 rmmod nvme_keyring 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.678 11:16:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.583 00:12:32.583 real 0m4.394s 00:12:32.583 user 0m0.875s 00:12:32.583 sys 0m1.512s 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.583 11:16:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:32.583 ************************************ 00:12:32.583 END TEST nvmf_target_multipath 00:12:32.583 ************************************ 00:12:32.583 11:16:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:32.583 11:16:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:32.583 11:16:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.583 11:16:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.583 11:16:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.583 ************************************ 00:12:32.583 START TEST nvmf_zcopy 00:12:32.583 ************************************ 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:32.583 * Looking for test storage... 
00:12:32.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.583 11:16:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.584 11:16:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.110 11:17:00 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:35.110 11:17:00 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:35.110 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:35.110 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:35.110 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:12:35.111 Found net devices under 0000:0a:00.0: cvl_0_0
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:12:35.111 Found net devices under 0000:0a:00.1: cvl_0_1
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:35.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:35.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms
00:12:35.111
00:12:35.111 --- 10.0.0.2 ping statistics ---
00:12:35.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:35.111 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:35.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:35.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms
00:12:35.111
00:12:35.111 --- 10.0.0.1 ping statistics ---
00:12:35.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:35.111 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=550070
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 550070
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 550070 ']'
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:35.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:35.111 11:17:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.111 [2024-07-12 11:17:00.954825] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:12:35.111 [2024-07-12 11:17:00.954924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:35.111 EAL: No free 2048 kB hugepages reported on node 1
00:12:35.111 [2024-07-12 11:17:01.018702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:35.111 [2024-07-12 11:17:01.120934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:35.111 [2024-07-12 11:17:01.120983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:35.111 [2024-07-12 11:17:01.120997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:35.111 [2024-07-12 11:17:01.121009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:35.111 [2024-07-12 11:17:01.121018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:35.111 [2024-07-12 11:17:01.121058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:35.111 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:35.111 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:12:35.111 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:35.111 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:35.111 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 [2024-07-12 11:17:01.269928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 [2024-07-12 11:17:01.286123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 malloc0
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:35.369 {
00:12:35.369 "params": {
00:12:35.369 "name": "Nvme$subsystem",
00:12:35.369 "trtype": "$TEST_TRANSPORT",
00:12:35.369 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:35.369 "adrfam": "ipv4",
00:12:35.369 "trsvcid": "$NVMF_PORT",
00:12:35.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:35.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:35.369 "hdgst": ${hdgst:-false},
00:12:35.369 "ddgst": ${ddgst:-false}
00:12:35.369 },
00:12:35.369 "method": "bdev_nvme_attach_controller"
00:12:35.369 }
00:12:35.369 EOF
00:12:35.369 )")
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:35.369 11:17:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:35.369 "params": {
00:12:35.369 "name": "Nvme1",
00:12:35.369 "trtype": "tcp",
00:12:35.369 "traddr": "10.0.0.2",
00:12:35.369 "adrfam": "ipv4",
00:12:35.369 "trsvcid": "4420",
00:12:35.369 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:35.369 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:35.369 "hdgst": false,
00:12:35.369 "ddgst": false
00:12:35.369 },
00:12:35.369 "method": "bdev_nvme_attach_controller"
00:12:35.369 }'
00:12:35.369 [2024-07-12 11:17:01.367553] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
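[Editor's note: the JSON object that gen_nvmf_target_json printed above can be re-created with a short Python sketch. This is a hypothetical illustration, not SPDK code; the enclosing config document that `jq .` receives is not visible in this log and is omitted here too.]

```python
import json

# Sketch of the per-controller entry emitted above for Nvme1 (hypothetical
# re-creation for illustration; field values mirror the log, not SPDK sources).
def controller_entry(n=1, trtype="tcp", traddr="10.0.0.2", trsvcid="4420",
                     hdgst=False, ddgst=False):
    return {
        "params": {
            "name": f"Nvme{n}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
            "hdgst": hdgst,      # header digest, defaulted off as in the log
            "ddgst": ddgst,      # data digest, defaulted off as in the log
        },
        "method": "bdev_nvme_attach_controller",
    }

print(json.dumps(controller_entry(), indent=2))
```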
00:12:35.369 [2024-07-12 11:17:01.367622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid550094 ]
00:12:35.369 EAL: No free 2048 kB hugepages reported on node 1
00:12:35.369 [2024-07-12 11:17:01.426064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:35.627 [2024-07-12 11:17:01.541941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:35.884 Running I/O for 10 seconds...
00:12:45.854
00:12:45.854 Latency(us)
00:12:45.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:45.854 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:45.854 Verification LBA range: start 0x0 length 0x1000
00:12:45.854 Nvme1n1 : 10.02 5988.34 46.78 0.00 0.00 21315.58 3179.71 30874.74
00:12:45.854 ===================================================================================================================
00:12:45.854 Total : 5988.34 46.78 0.00 0.00 21315.58 3179.71 30874.74
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=551403
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:46.112 {
00:12:46.112 "params": {
00:12:46.112 "name": "Nvme$subsystem",
00:12:46.112 "trtype": "$TEST_TRANSPORT",
00:12:46.112 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:46.112 "adrfam": "ipv4",
00:12:46.112 "trsvcid": "$NVMF_PORT",
00:12:46.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:46.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:46.112 "hdgst": ${hdgst:-false},
00:12:46.112 "ddgst": ${ddgst:-false}
00:12:46.112 },
00:12:46.112 "method": "bdev_nvme_attach_controller"
00:12:46.112 }
00:12:46.112 EOF
00:12:46.112 )")
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:46.112 [2024-07-12 11:17:12.060783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.060825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
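[Editor's note: the 10-second verify run above reported 5988.34 IOPS at the 8192-byte IO size passed via `-o 8192`; the MiB/s column is just IOPS times IO size, which can be cross-checked directly (illustrative arithmetic only, not part of the test):]

```python
# Cross-check of the bdevperf table above: 5988.34 IOPS at an 8192-byte IO size
# should reproduce the reported 46.78 MiB/s throughput column.
IO_SIZE = 8192          # -o 8192 on the bdevperf command line
iops = 5988.34          # Nvme1n1 row, IOPS column
mib_s = iops * IO_SIZE / (1024 * 1024)
print(f"{mib_s:.2f} MiB/s")  # prints 46.78 MiB/s
```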
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:46.112 11:17:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:46.112 "params": {
00:12:46.112 "name": "Nvme1",
00:12:46.112 "trtype": "tcp",
00:12:46.112 "traddr": "10.0.0.2",
00:12:46.112 "adrfam": "ipv4",
00:12:46.112 "trsvcid": "4420",
00:12:46.112 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:46.112 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:46.112 "hdgst": false,
00:12:46.112 "ddgst": false
00:12:46.112 },
00:12:46.112 "method": "bdev_nvme_attach_controller"
00:12:46.112 }'
00:12:46.112 [2024-07-12 11:17:12.068730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.068752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 [2024-07-12 11:17:12.076751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.076771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 [2024-07-12 11:17:12.084772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.084792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 [2024-07-12 11:17:12.092796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.092816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 [2024-07-12 11:17:12.095726] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:12:46.112 [2024-07-12 11:17:12.095797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551403 ]
00:12:46.112 [2024-07-12 11:17:12.100815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.100834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 [2024-07-12 11:17:12.108837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.108877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.112 [2024-07-12 11:17:12.116880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.112 [2024-07-12 11:17:12.116900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 EAL: No free 2048 kB hugepages reported on node 1
00:12:46.113 [2024-07-12 11:17:12.124899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.124934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.132937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.132959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.140949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.140970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.148970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.148991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.155131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:46.113 [2024-07-12 11:17:12.156990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.157010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.165058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.165098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.173033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.173054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.181054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.181074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.189075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.189095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.197098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.197119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.205119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.205139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.213176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.213203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.221212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.221247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.229200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.229234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.113 [2024-07-12 11:17:12.237255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.113 [2024-07-12 11:17:12.237275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.245237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.245261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.253266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.253289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.261271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.261291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.264320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:46.372 [2024-07-12 11:17:12.269298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.269318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.277327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.277350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.285377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.285414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.293398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.293436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.301425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.301464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.309448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.309491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.317471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.317511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.325486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.325524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.333473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.333495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.341542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.341577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.349559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.349599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.357538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.357558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.365559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.365579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.373588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.373609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.381610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.381632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.389631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.389652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.397680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.397702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.405672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.405692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.413695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.413716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.421716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.421735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.429739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.429758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.437764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.437783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.445804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.445825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.453826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.453861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.461871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.461893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.469892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.469928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.477932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.477956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 [2024-07-12 11:17:12.485936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.485959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.372 Running I/O for 5 seconds...
00:12:46.372 [2024-07-12 11:17:12.498286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.372 [2024-07-12 11:17:12.498314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.508058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.508088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.518694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.518721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.531264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.531291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.540969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.540997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.551565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.551592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.562487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.562514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.572944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.572972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.583669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.583696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.596445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.596471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.606566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.606592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.616966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.616993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.627171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.627198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.637916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.637952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.648730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.648756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.661153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.661180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.671653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.671679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.682272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.682314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.695541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.695568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.705161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.705187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.715987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.716014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.726440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:46.631 [2024-07-12 11:17:12.726467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:46.631 [2024-07-12 11:17:12.736762] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.631 [2024-07-12 11:17:12.736789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.631 [2024-07-12 11:17:12.747287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.631 [2024-07-12 11:17:12.747313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.631 [2024-07-12 11:17:12.758180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.631 [2024-07-12 11:17:12.758207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.769407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.769433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.782007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.782035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.792178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.792205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.802355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.802396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.813032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.813059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.823700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.823726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.834426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.834459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.847044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.847071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.856992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.857019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.867909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.867939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.880370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.880397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.889942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.889970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.900284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.900313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.910993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 
[2024-07-12 11:17:12.911021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.923594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.923621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.935415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.935457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.944416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.944458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.955833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.955884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.968377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.968403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.978390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.978416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.989040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.989067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:12.999603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:12.999644] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.890 [2024-07-12 11:17:13.012667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.890 [2024-07-12 11:17:13.012694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.022724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.022753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.033565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.033592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.046759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.046794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.056953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.056996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.067325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.067352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.077821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.077848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.088352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.088379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:47.148 [2024-07-12 11:17:13.101071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.101097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.110956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.110982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.121444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.121471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.132395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.132421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.145826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.145875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.156410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.156436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.167190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.167218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.178068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.178095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.188771] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.188797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.201176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.201203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.211277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.211304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.222010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.222036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.232697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.232722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.245468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.245494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.255734] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.255768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.266014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.266041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.148 [2024-07-12 11:17:13.276411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:47.148 [2024-07-12 11:17:13.276441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.406 [2024-07-12 11:17:13.287657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.287686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.300118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.300145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.310194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.310221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.320699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.320725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.331379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.331406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.343790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.343817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.353982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.354010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.364564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 
[2024-07-12 11:17:13.364591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.376623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.376649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.386740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.386766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.397532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.397559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.407794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.407820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.418029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.418057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.428810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.428837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.439451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.439477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.452898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.452925] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.462972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.463005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.473419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.473446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.484640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.484668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.497182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.497209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.507803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.507830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.518446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.518472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.407 [2024-07-12 11:17:13.531537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.407 [2024-07-12 11:17:13.531563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.541939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.541967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:47.666 [2024-07-12 11:17:13.552729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.552756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.563264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.563290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.573704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.573731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.584846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.584890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.595315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.595342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.608064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.608091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.619592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.619619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.628574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.628600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.639797] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.639824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.651746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.651772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.661086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.661113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.672519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.672545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.683339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.683365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.693757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.693783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.704226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.704252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.714859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.714897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.725627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.725653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.736193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.736219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.748384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.748411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.758366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.758392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.769169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.769195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.781567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.781609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.666 [2024-07-12 11:17:13.791553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.666 [2024-07-12 11:17:13.791579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.924 [2024-07-12 11:17:13.802437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.924 [2024-07-12 11:17:13.802465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.924 [2024-07-12 11:17:13.813175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.924 
[2024-07-12 11:17:13.813202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.924 [2024-07-12 11:17:13.824042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.924 [2024-07-12 11:17:13.824069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.924 [2024-07-12 11:17:13.836283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.924 [2024-07-12 11:17:13.836309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.924 [2024-07-12 11:17:13.846198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.924 [2024-07-12 11:17:13.846224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.924 [2024-07-12 11:17:13.856386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.924 [2024-07-12 11:17:13.856412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.866790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.866816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.877991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.878019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.888823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.888874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.899939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.899976] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.912556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.912582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.924237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.924264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.933409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.933436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.945144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.945170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.955683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.955709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.966735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.966761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.977491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.977517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:13.988337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:13.988363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:47.925 [2024-07-12 11:17:14.000603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:14.000629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:14.010623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:14.010650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:14.021621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:14.021648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:14.034765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:14.034792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:14.046423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:14.046450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.925 [2024-07-12 11:17:14.055877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.925 [2024-07-12 11:17:14.055905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.067798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.067825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.078368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.078395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.089413] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.089440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.101915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.101942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.113662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.113689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.122765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.122791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.134469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.134495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.147419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.147460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.158950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.158978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.167596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.167622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.179367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.179393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.191786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.191812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.201994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.202021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.212385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.212411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.222572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.222598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.233170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.233196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.243613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.243639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.254385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.254411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.264720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 
[2024-07-12 11:17:14.264746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.275239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.275265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.285715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.285741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.296217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.296243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.184 [2024-07-12 11:17:14.306689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.184 [2024-07-12 11:17:14.306715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.317756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.317805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.328430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.328457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.340979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.341006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.350899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.350926] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.361605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.361631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.373623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.373650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.383568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.383594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.395632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.395659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.405566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.405592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.415913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.415940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.426582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.426608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.439268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.439294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:48.442 [2024-07-12 11:17:14.450964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.450991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.460338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.460364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.472041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.472069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.484581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.484608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.442 [2024-07-12 11:17:14.494940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.442 [2024-07-12 11:17:14.494975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.505381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.505407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.515881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.515908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.526242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.526283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.536779] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.536805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.549568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.549595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.559578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.559604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.443 [2024-07-12 11:17:14.570216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.443 [2024-07-12 11:17:14.570244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.581839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.581875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.592322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.592349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.602739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.602766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.613319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.613346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.624074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.624102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.634861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.634896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.647438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.647466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.657514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.657541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.667575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.667601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.678053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.678080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.688681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.688707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.702590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.702625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.713001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 
[2024-07-12 11:17:14.713028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.723488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.723515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.734266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.734292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.744746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.744772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.754824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.754875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.765287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.765314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.776008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.776035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.786573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.786600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.799369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.799397] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.811030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.811057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.819759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.819786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.702 [2024-07-12 11:17:14.831319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.702 [2024-07-12 11:17:14.831347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.961 [2024-07-12 11:17:14.842025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.961 [2024-07-12 11:17:14.842053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.961 [2024-07-12 11:17:14.852351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.961 [2024-07-12 11:17:14.852377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.961 [2024-07-12 11:17:14.862819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.961 [2024-07-12 11:17:14.862846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.961 [2024-07-12 11:17:14.873162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.961 [2024-07-12 11:17:14.873190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.961 [2024-07-12 11:17:14.883466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.961 [2024-07-12 11:17:14.883493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:48.962 [2024-07-12 11:17:14.893645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.893672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.903857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.903905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.914168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.914195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.924488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.924515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.935058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.935086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.945802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.945830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.956313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.956340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.968845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.968879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.980391] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.980418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:14.989958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:14.989985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.001154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.001181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.013052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.013079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.022470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.022497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.033140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.033167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.045258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.045285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.054113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.054140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.064749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.064776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.074805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.074832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.962 [2024-07-12 11:17:15.084723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.962 [2024-07-12 11:17:15.084751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.220 [2024-07-12 11:17:15.095796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.220 [2024-07-12 11:17:15.095825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.220 [2024-07-12 11:17:15.108401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.220 [2024-07-12 11:17:15.108437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.220 [2024-07-12 11:17:15.119712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.220 [2024-07-12 11:17:15.119739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.220 [2024-07-12 11:17:15.128540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.220 [2024-07-12 11:17:15.128567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.220 [2024-07-12 11:17:15.139835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.220 [2024-07-12 11:17:15.139861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.220 [2024-07-12 11:17:15.150529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.220 
[2024-07-12 11:17:15.150556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.160796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.160824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.171258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.171286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.181851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.181887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.192152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.192180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.202816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.202843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.213090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.213118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.223614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.223641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.234548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.234576] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.246969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.246997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.256471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.256498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.269433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.269459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.279236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.279262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.289614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.289641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.300458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.300484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.311133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.311175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.322117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.322144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:49.221 [2024-07-12 11:17:15.332836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.332888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.221 [2024-07-12 11:17:15.343180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.221 [2024-07-12 11:17:15.343208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.354027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.354055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.364559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.364587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.375512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.375538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.386481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.386507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.398919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.398947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.409257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.409283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.419681] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.419707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.430227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.430253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.441288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.441314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.454200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.454226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.464635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.464661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.475426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.475453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.487610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.487636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.497483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.497509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480 [2024-07-12 11:17:15.510006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:49.480 [2024-07-12 11:17:15.510033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.480
[repeated output trimmed: the same pair of messages (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) recurs at roughly 10 ms intervals from 2024-07-12 11:17:15.520204 through 11:17:17.372783]
[2024-07-12 11:17:17.381812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.295 
[2024-07-12 11:17:17.381838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.295 [2024-07-12 11:17:17.395999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.295 [2024-07-12 11:17:17.396026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.295 [2024-07-12 11:17:17.406534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.295 [2024-07-12 11:17:17.406560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.295 [2024-07-12 11:17:17.417247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.295 [2024-07-12 11:17:17.417273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.430056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.430084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.440259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.440285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.450971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.450999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.463453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.463481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.473499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.473525] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.484035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.484063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.494463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.494490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.504130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.504171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.509676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.509700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 00:12:51.553 Latency(us) 00:12:51.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.553 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:51.553 Nvme1n1 : 5.01 12005.25 93.79 0.00 0.00 10647.72 4466.16 18641.35 00:12:51.553 =================================================================================================================== 00:12:51.553 Total : 12005.25 93.79 0.00 0.00 10647.72 4466.16 18641.35 00:12:51.553 [2024-07-12 11:17:17.517696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.517719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.525714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.525743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.533774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.533810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.541824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.541879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.549846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.549900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.557880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.553 [2024-07-12 11:17:17.557929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.553 [2024-07-12 11:17:17.565899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.565949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.573921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.573971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.581937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.581987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.589963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.590013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 
[2024-07-12 11:17:17.597996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.598044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.606015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.606068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.614031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.614083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.622052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.622103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.630070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.630122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.638087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.638137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.646069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.646100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.654066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.654089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.662087] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.662109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.670112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.670134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.554 [2024-07-12 11:17:17.678134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.554 [2024-07-12 11:17:17.678167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.686230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.686280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.694251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.694302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.702223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.702260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.710244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.710263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.718255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.718276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.726272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.726292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.734311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.734342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.742372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.742421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.750394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.750445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.758347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.758368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.766367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.766386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 [2024-07-12 11:17:17.774384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:51.812 [2024-07-12 11:17:17.774404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:51.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (551403) - No such process 00:12:51.812 11:17:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 551403 00:12:51.812 11:17:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.812 11:17:17 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.812 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.813 delay0 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.813 11:17:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:51.813 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.813 [2024-07-12 11:17:17.854803] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:58.420 Initializing NVMe Controllers 00:12:58.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:58.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:58.420 Initialization complete. Launching workers. 
00:12:58.420 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 67 00:12:58.420 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 354, failed to submit 33 00:12:58.420 success 158, unsuccess 196, failed 0 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.420 rmmod nvme_tcp 00:12:58.420 rmmod nvme_fabrics 00:12:58.420 rmmod nvme_keyring 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 550070 ']' 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 550070 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 550070 ']' 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 550070 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 550070 00:12:58.420 11:17:23 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 550070' 00:12:58.420 killing process with pid 550070 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 550070 00:12:58.420 11:17:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 550070 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.421 11:17:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.327 11:17:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.327 00:13:00.327 real 0m27.793s 00:13:00.327 user 0m41.010s 00:13:00.327 sys 0m8.164s 00:13:00.327 11:17:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.327 11:17:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:00.327 ************************************ 00:13:00.327 END TEST nvmf_zcopy 00:13:00.327 ************************************ 00:13:00.327 11:17:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.327 11:17:26 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:00.327 11:17:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:00.327 11:17:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.327 11:17:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.327 ************************************ 00:13:00.327 START TEST nvmf_nmic 00:13:00.327 ************************************ 00:13:00.327 11:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:00.327 * Looking for test storage... 00:13:00.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.328 
11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.328 11:17:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.858 11:17:28 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.858 11:17:28 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.858 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:13:02.859 00:13:02.859 --- 10.0.0.2 ping statistics --- 00:13:02.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.859 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:13:02.859 00:13:02.859 --- 10.0.0.1 ping statistics --- 00:13:02.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.859 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=554666 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 554666 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 554666 ']' 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.859 11:17:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.859 [2024-07-12 11:17:28.735472] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:13:02.859 [2024-07-12 11:17:28.735542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.859 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.859 [2024-07-12 11:17:28.799507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.859 [2024-07-12 11:17:28.913805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.859 [2024-07-12 11:17:28.913878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.859 [2024-07-12 11:17:28.913894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.859 [2024-07-12 11:17:28.913905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.859 [2024-07-12 11:17:28.913931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.859 [2024-07-12 11:17:28.913981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.859 [2024-07-12 11:17:28.914381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.859 [2024-07-12 11:17:28.914451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.859 [2024-07-12 11:17:28.914447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.117 [2024-07-12 11:17:29.082609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.117 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 Malloc0 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 [2024-07-12 11:17:29.135947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:03.118 test case1: single bdev can't be used in multiple subsystems 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 [2024-07-12 11:17:29.159785] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:03.118 [2024-07-12 11:17:29.159813] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:03.118 [2024-07-12 11:17:29.159844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.118 request: 00:13:03.118 { 00:13:03.118 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:03.118 "namespace": { 00:13:03.118 "bdev_name": "Malloc0", 00:13:03.118 "no_auto_visible": false 00:13:03.118 }, 00:13:03.118 "method": "nvmf_subsystem_add_ns", 00:13:03.118 "req_id": 1 00:13:03.118 } 00:13:03.118 Got JSON-RPC error response 00:13:03.118 response: 00:13:03.118 { 00:13:03.118 "code": -32602, 00:13:03.118 "message": "Invalid parameters" 00:13:03.118 } 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:13:03.118 Adding namespace failed - expected result. 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:03.118 test case2: host connect to nvmf target in multiple paths 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.118 [2024-07-12 11:17:29.167929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.118 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.682 11:17:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:04.247 11:17:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.247 11:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.247 11:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.247 11:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.247 11:17:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.771 11:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.771 11:17:32 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.771 11:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.771 11:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.771 11:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.771 11:17:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:06.771 11:17:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:06.771 [global] 00:13:06.771 thread=1 00:13:06.771 invalidate=1 00:13:06.771 rw=write 00:13:06.771 time_based=1 00:13:06.771 runtime=1 00:13:06.771 ioengine=libaio 00:13:06.771 direct=1 00:13:06.771 bs=4096 00:13:06.771 iodepth=1 00:13:06.771 norandommap=0 00:13:06.771 numjobs=1 00:13:06.771 00:13:06.771 verify_dump=1 00:13:06.771 verify_backlog=512 00:13:06.771 verify_state_save=0 00:13:06.771 do_verify=1 00:13:06.771 verify=crc32c-intel 00:13:06.771 [job0] 00:13:06.771 filename=/dev/nvme0n1 00:13:06.771 Could not set queue depth (nvme0n1) 00:13:06.771 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:06.771 fio-3.35 00:13:06.771 Starting 1 thread 00:13:07.702 00:13:07.702 job0: (groupid=0, jobs=1): err= 0: pid=555295: Fri Jul 12 11:17:33 2024 00:13:07.702 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:13:07.702 slat (nsec): min=7673, max=34788, avg=22341.82, stdev=9842.80 00:13:07.702 clat (usec): min=40913, max=42101, avg=41931.98, stdev=234.26 00:13:07.702 lat (usec): min=40947, max=42108, avg=41954.32, stdev=230.78 00:13:07.702 clat percentiles (usec): 00:13:07.702 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:13:07.702 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 
00:13:07.702 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:07.702 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:07.702 | 99.99th=[42206] 00:13:07.702 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:13:07.702 slat (usec): min=5, max=30625, avg=67.09, stdev=1353.15 00:13:07.702 clat (usec): min=134, max=354, avg=155.66, stdev=27.22 00:13:07.702 lat (usec): min=140, max=30830, avg=222.75, stdev=1355.61 00:13:07.702 clat percentiles (usec): 00:13:07.702 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:13:07.702 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:13:07.702 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 241], 00:13:07.702 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 355], 99.95th=[ 355], 00:13:07.702 | 99.99th=[ 355] 00:13:07.702 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:07.702 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:07.702 lat (usec) : 250=95.32%, 500=0.56% 00:13:07.702 lat (msec) : 50=4.12% 00:13:07.702 cpu : usr=0.00%, sys=0.58%, ctx=537, majf=0, minf=2 00:13:07.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.702 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:07.702 00:13:07.702 Run status group 0 (all jobs): 00:13:07.702 READ: bw=84.8KiB/s (86.8kB/s), 84.8KiB/s-84.8KiB/s (86.8kB/s-86.8kB/s), io=88.0KiB (90.1kB), run=1038-1038msec 00:13:07.702 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:13:07.702 00:13:07.702 Disk stats (read/write): 00:13:07.702 nvme0n1: ios=45/512, 
merge=0/0, ticks=1740/79, in_queue=1819, util=98.60% 00:13:07.702 11:17:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.959 rmmod nvme_tcp 00:13:07.959 rmmod nvme_fabrics 00:13:07.959 rmmod nvme_keyring 00:13:07.959 11:17:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:07.959 11:17:34 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 554666 ']' 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 554666 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 554666 ']' 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 554666 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 554666 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 554666' 00:13:07.959 killing process with pid 554666 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 554666 00:13:07.959 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 554666 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.218 11:17:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.753 11:17:36 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:10.753 00:13:10.753 real 0m10.011s 00:13:10.753 user 0m22.278s 00:13:10.753 sys 0m2.388s 00:13:10.753 11:17:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:10.753 11:17:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:10.753 ************************************ 00:13:10.753 END TEST nvmf_nmic 00:13:10.753 ************************************ 00:13:10.753 11:17:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:10.753 11:17:36 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:10.753 11:17:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:10.753 11:17:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.753 11:17:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.753 ************************************ 00:13:10.753 START TEST nvmf_fio_target 00:13:10.753 ************************************ 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:10.753 * Looking for test storage... 
00:13:10.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.753 11:17:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.657 
11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.657 
11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:12.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:12.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:12.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:12.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.657 11:17:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:13:12.657 00:13:12.657 --- 10.0.0.2 ping statistics --- 00:13:12.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.657 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:12.657 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:12.657 00:13:12.657 --- 10.0.0.1 ping statistics --- 00:13:12.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.658 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=557370 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 557370 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- 
# '[' -z 557370 ']' 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.658 11:17:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.658 [2024-07-12 11:17:38.709798] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:13:12.658 [2024-07-12 11:17:38.709904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.658 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.658 [2024-07-12 11:17:38.776803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.916 [2024-07-12 11:17:38.886504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.916 [2024-07-12 11:17:38.886554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.916 [2024-07-12 11:17:38.886582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.916 [2024-07-12 11:17:38.886594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.916 [2024-07-12 11:17:38.886604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:12.916 [2024-07-12 11:17:38.886667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.916 [2024-07-12 11:17:38.886768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.916 [2024-07-12 11:17:38.886823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.916 [2024-07-12 11:17:38.886826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.916 11:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:13.481 [2024-07-12 11:17:39.324734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.481 11:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.739 11:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:13.739 11:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:13.997 11:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:13.997 11:17:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:13:14.255 11:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:14.255 11:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:14.514 11:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:14.514 11:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:14.771 11:17:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.030 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:15.030 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.287 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:15.287 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:15.544 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:15.544 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:15.803 11:17:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.060 11:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:16.060 11:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:16.318 11:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:16.318 11:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:16.575 11:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.833 [2024-07-12 11:17:42.784724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.833 11:17:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:17.098 11:17:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:17.363 11:17:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.926 11:17:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:17.927 11:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.927 11:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.927 11:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:17.927 11:17:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:17.927 11:17:43 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:20.452 11:17:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:20.452 [global] 00:13:20.452 thread=1 00:13:20.452 invalidate=1 00:13:20.452 rw=write 00:13:20.452 time_based=1 00:13:20.452 runtime=1 00:13:20.452 ioengine=libaio 00:13:20.452 direct=1 00:13:20.452 bs=4096 00:13:20.452 iodepth=1 00:13:20.452 norandommap=0 00:13:20.452 numjobs=1 00:13:20.452 00:13:20.452 verify_dump=1 00:13:20.452 verify_backlog=512 00:13:20.452 verify_state_save=0 00:13:20.452 do_verify=1 00:13:20.452 verify=crc32c-intel 00:13:20.452 [job0] 00:13:20.452 filename=/dev/nvme0n1 00:13:20.452 [job1] 00:13:20.452 filename=/dev/nvme0n2 00:13:20.452 [job2] 00:13:20.452 filename=/dev/nvme0n3 00:13:20.452 [job3] 00:13:20.452 filename=/dev/nvme0n4 00:13:20.452 Could not set queue depth (nvme0n1) 00:13:20.452 Could not set queue depth (nvme0n2) 00:13:20.452 Could not set queue depth (nvme0n3) 00:13:20.452 Could not set queue depth (nvme0n4) 00:13:20.452 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.452 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:13:20.452 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.452 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.453 fio-3.35 00:13:20.453 Starting 4 threads 00:13:21.384 00:13:21.384 job0: (groupid=0, jobs=1): err= 0: pid=558450: Fri Jul 12 11:17:47 2024 00:13:21.384 read: IOPS=71, BW=286KiB/s (293kB/s)(292KiB/1020msec) 00:13:21.384 slat (nsec): min=5892, max=50543, avg=16435.36, stdev=8210.38 00:13:21.384 clat (usec): min=228, max=43002, avg=12206.37, stdev=18829.74 00:13:21.384 lat (usec): min=245, max=43021, avg=12222.80, stdev=18832.91 00:13:21.384 clat percentiles (usec): 00:13:21.384 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 289], 00:13:21.384 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 359], 00:13:21.384 | 70.00th=[ 515], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:21.384 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:21.384 | 99.99th=[43254] 00:13:21.384 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:21.384 slat (nsec): min=7181, max=66466, avg=17665.01, stdev=7554.13 00:13:21.384 clat (usec): min=153, max=529, avg=225.44, stdev=45.56 00:13:21.384 lat (usec): min=165, max=550, avg=243.11, stdev=45.12 00:13:21.384 clat percentiles (usec): 00:13:21.384 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 194], 00:13:21.384 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 225], 00:13:21.384 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 306], 00:13:21.384 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 529], 99.95th=[ 529], 00:13:21.384 | 99.99th=[ 529] 00:13:21.385 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:21.385 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:21.385 lat (usec) : 250=73.50%, 500=22.56%, 750=0.34% 
00:13:21.385 lat (msec) : 50=3.59% 00:13:21.385 cpu : usr=0.69%, sys=1.28%, ctx=585, majf=0, minf=1 00:13:21.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 issued rwts: total=73,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.385 job1: (groupid=0, jobs=1): err= 0: pid=558451: Fri Jul 12 11:17:47 2024 00:13:21.385 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:13:21.385 slat (nsec): min=12436, max=33228, avg=18081.64, stdev=6164.20 00:13:21.385 clat (usec): min=40966, max=42001, avg=41870.87, stdev=295.20 00:13:21.385 lat (usec): min=40979, max=42017, avg=41888.96, stdev=295.93 00:13:21.385 clat percentiles (usec): 00:13:21.385 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:13:21.385 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:21.385 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:21.385 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:21.385 | 99.99th=[42206] 00:13:21.385 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:13:21.385 slat (nsec): min=7005, max=36597, avg=15117.29, stdev=6157.73 00:13:21.385 clat (usec): min=148, max=309, avg=191.28, stdev=17.11 00:13:21.385 lat (usec): min=156, max=317, avg=206.39, stdev=18.05 00:13:21.385 clat percentiles (usec): 00:13:21.385 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:13:21.385 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:13:21.385 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 221], 00:13:21.385 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 310], 99.95th=[ 310], 00:13:21.385 | 99.99th=[ 310] 00:13:21.385 bw ( KiB/s): min= 4096, 
max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:21.385 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:21.385 lat (usec) : 250=95.51%, 500=0.37% 00:13:21.385 lat (msec) : 50=4.12% 00:13:21.385 cpu : usr=0.58%, sys=0.58%, ctx=538, majf=0, minf=1 00:13:21.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.385 job2: (groupid=0, jobs=1): err= 0: pid=558452: Fri Jul 12 11:17:47 2024 00:13:21.385 read: IOPS=2215, BW=8863KiB/s (9076kB/s)(8872KiB/1001msec) 00:13:21.385 slat (nsec): min=5167, max=63128, avg=12677.42, stdev=6151.66 00:13:21.385 clat (usec): min=170, max=1001, avg=216.50, stdev=48.03 00:13:21.385 lat (usec): min=176, max=1007, avg=229.18, stdev=49.35 00:13:21.385 clat percentiles (usec): 00:13:21.385 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:13:21.385 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:13:21.385 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 285], 00:13:21.385 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 611], 99.95th=[ 799], 00:13:21.385 | 99.99th=[ 1004] 00:13:21.385 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:21.385 slat (usec): min=7, max=17562, avg=22.22, stdev=346.85 00:13:21.385 clat (usec): min=125, max=421, avg=162.78, stdev=30.12 00:13:21.385 lat (usec): min=133, max=17932, avg=185.01, stdev=352.29 00:13:21.385 clat percentiles (usec): 00:13:21.385 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:13:21.385 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:13:21.385 | 70.00th=[ 161], 80.00th=[ 180], 90.00th=[ 208], 
95.00th=[ 229], 00:13:21.385 | 99.00th=[ 255], 99.50th=[ 285], 99.90th=[ 392], 99.95th=[ 416], 00:13:21.385 | 99.99th=[ 420] 00:13:21.385 bw ( KiB/s): min=10976, max=10976, per=68.93%, avg=10976.00, stdev= 0.00, samples=1 00:13:21.385 iops : min= 2744, max= 2744, avg=2744.00, stdev= 0.00, samples=1 00:13:21.385 lat (usec) : 250=92.84%, 500=6.95%, 750=0.17%, 1000=0.02% 00:13:21.385 lat (msec) : 2=0.02% 00:13:21.385 cpu : usr=4.00%, sys=6.40%, ctx=4780, majf=0, minf=1 00:13:21.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 issued rwts: total=2218,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.385 job3: (groupid=0, jobs=1): err= 0: pid=558453: Fri Jul 12 11:17:47 2024 00:13:21.385 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:13:21.385 slat (nsec): min=13480, max=35360, avg=19500.19, stdev=6731.55 00:13:21.385 clat (usec): min=40961, max=42008, avg=41793.29, stdev=381.66 00:13:21.385 lat (usec): min=40980, max=42022, avg=41812.79, stdev=382.22 00:13:21.385 clat percentiles (usec): 00:13:21.385 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:13:21.385 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:21.385 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:21.385 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:21.385 | 99.99th=[42206] 00:13:21.385 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:13:21.385 slat (nsec): min=7338, max=53040, avg=18608.60, stdev=7921.09 00:13:21.385 clat (usec): min=154, max=309, avg=222.89, stdev=24.63 00:13:21.385 lat (usec): min=162, max=321, avg=241.50, stdev=27.02 00:13:21.385 clat percentiles (usec): 
00:13:21.385 | 1.00th=[ 161], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 202], 00:13:21.385 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:13:21.385 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 260], 00:13:21.385 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 310], 00:13:21.385 | 99.99th=[ 310] 00:13:21.385 bw ( KiB/s): min= 4096, max= 4096, per=25.73%, avg=4096.00, stdev= 0.00, samples=1 00:13:21.385 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:21.385 lat (usec) : 250=84.05%, 500=12.01% 00:13:21.385 lat (msec) : 50=3.94% 00:13:21.385 cpu : usr=0.60%, sys=1.30%, ctx=533, majf=0, minf=2 00:13:21.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:21.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.385 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:21.385 00:13:21.385 Run status group 0 (all jobs): 00:13:21.385 READ: bw=9073KiB/s (9291kB/s), 83.7KiB/s-8863KiB/s (85.7kB/s-9076kB/s), io=9336KiB (9560kB), run=1001-1029msec 00:13:21.385 WRITE: bw=15.5MiB/s (16.3MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1029msec 00:13:21.385 00:13:21.385 Disk stats (read/write): 00:13:21.385 nvme0n1: ios=118/512, merge=0/0, ticks=753/110, in_queue=863, util=90.48% 00:13:21.385 nvme0n2: ios=40/512, merge=0/0, ticks=1670/98, in_queue=1768, util=97.96% 00:13:21.385 nvme0n3: ios=1987/2048, merge=0/0, ticks=1360/320, in_queue=1680, util=97.80% 00:13:21.385 nvme0n4: ios=17/512, merge=0/0, ticks=712/109, in_queue=821, util=89.65% 00:13:21.385 11:17:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:21.385 [global] 00:13:21.385 
thread=1 00:13:21.385 invalidate=1 00:13:21.385 rw=randwrite 00:13:21.385 time_based=1 00:13:21.385 runtime=1 00:13:21.385 ioengine=libaio 00:13:21.385 direct=1 00:13:21.385 bs=4096 00:13:21.385 iodepth=1 00:13:21.385 norandommap=0 00:13:21.385 numjobs=1 00:13:21.385 00:13:21.385 verify_dump=1 00:13:21.385 verify_backlog=512 00:13:21.385 verify_state_save=0 00:13:21.385 do_verify=1 00:13:21.385 verify=crc32c-intel 00:13:21.385 [job0] 00:13:21.385 filename=/dev/nvme0n1 00:13:21.385 [job1] 00:13:21.385 filename=/dev/nvme0n2 00:13:21.385 [job2] 00:13:21.385 filename=/dev/nvme0n3 00:13:21.385 [job3] 00:13:21.385 filename=/dev/nvme0n4 00:13:21.385 Could not set queue depth (nvme0n1) 00:13:21.385 Could not set queue depth (nvme0n2) 00:13:21.385 Could not set queue depth (nvme0n3) 00:13:21.385 Could not set queue depth (nvme0n4) 00:13:21.643 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.643 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.643 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.643 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:21.643 fio-3.35 00:13:21.643 Starting 4 threads 00:13:23.048 00:13:23.048 job0: (groupid=0, jobs=1): err= 0: pid=558677: Fri Jul 12 11:17:48 2024 00:13:23.048 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:23.048 slat (nsec): min=5603, max=45229, avg=13208.98, stdev=4662.48 00:13:23.048 clat (usec): min=171, max=546, avg=227.21, stdev=26.54 00:13:23.048 lat (usec): min=177, max=555, avg=240.42, stdev=28.40 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:13:23.048 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:13:23.048 | 70.00th=[ 237], 80.00th=[ 243], 
90.00th=[ 251], 95.00th=[ 269], 00:13:23.048 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 400], 99.95th=[ 449], 00:13:23.048 | 99.99th=[ 545] 00:13:23.048 write: IOPS=2486, BW=9946KiB/s (10.2MB/s)(9956KiB/1001msec); 0 zone resets 00:13:23.048 slat (nsec): min=6135, max=54540, avg=17099.37, stdev=5867.17 00:13:23.048 clat (usec): min=137, max=386, avg=178.57, stdev=24.37 00:13:23.048 lat (usec): min=147, max=395, avg=195.67, stdev=25.65 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 165], 00:13:23.048 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:13:23.048 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 217], 00:13:23.048 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 330], 99.95th=[ 330], 00:13:23.048 | 99.99th=[ 388] 00:13:23.048 bw ( KiB/s): min=10824, max=10824, per=61.19%, avg=10824.00, stdev= 0.00, samples=1 00:13:23.048 iops : min= 2706, max= 2706, avg=2706.00, stdev= 0.00, samples=1 00:13:23.048 lat (usec) : 250=93.32%, 500=6.66%, 750=0.02% 00:13:23.048 cpu : usr=6.60%, sys=8.30%, ctx=4537, majf=0, minf=2 00:13:23.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.048 issued rwts: total=2048,2489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.048 job1: (groupid=0, jobs=1): err= 0: pid=558678: Fri Jul 12 11:17:48 2024 00:13:23.048 read: IOPS=963, BW=3853KiB/s (3945kB/s)(3868KiB/1004msec) 00:13:23.048 slat (nsec): min=6389, max=58196, avg=13957.64, stdev=5618.28 00:13:23.048 clat (usec): min=210, max=41991, avg=779.98, stdev=4527.69 00:13:23.048 lat (usec): min=227, max=42008, avg=793.94, stdev=4528.82 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 231], 
20.00th=[ 237], 00:13:23.048 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 277], 00:13:23.048 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 383], 00:13:23.048 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:23.048 | 99.99th=[42206] 00:13:23.048 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:13:23.048 slat (nsec): min=5990, max=66486, avg=15996.10, stdev=7956.86 00:13:23.048 clat (usec): min=135, max=483, avg=205.94, stdev=51.53 00:13:23.048 lat (usec): min=142, max=504, avg=221.94, stdev=55.12 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 172], 00:13:23.048 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:13:23.048 | 70.00th=[ 212], 80.00th=[ 233], 90.00th=[ 293], 95.00th=[ 310], 00:13:23.048 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[ 453], 99.95th=[ 482], 00:13:23.048 | 99.99th=[ 482] 00:13:23.048 bw ( KiB/s): min= 3912, max= 4280, per=23.16%, avg=4096.00, stdev=260.22, samples=2 00:13:23.048 iops : min= 978, max= 1070, avg=1024.00, stdev=65.05, samples=2 00:13:23.048 lat (usec) : 250=60.37%, 500=38.02%, 750=1.00% 00:13:23.048 lat (msec) : 50=0.60% 00:13:23.048 cpu : usr=1.99%, sys=4.09%, ctx=1991, majf=0, minf=1 00:13:23.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.048 issued rwts: total=967,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.048 job2: (groupid=0, jobs=1): err= 0: pid=558679: Fri Jul 12 11:17:48 2024 00:13:23.048 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:13:23.048 slat (nsec): min=13436, max=34862, avg=25573.55, stdev=8518.73 00:13:23.048 clat (usec): min=40924, max=41944, 
avg=41049.94, stdev=262.79 00:13:23.048 lat (usec): min=40958, max=41978, avg=41075.51, stdev=262.71 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:23.048 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:23.048 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:13:23.048 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:23.048 | 99.99th=[42206] 00:13:23.048 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:13:23.048 slat (nsec): min=6767, max=33743, avg=12688.06, stdev=5956.76 00:13:23.048 clat (usec): min=155, max=1885, avg=194.35, stdev=77.28 00:13:23.048 lat (usec): min=163, max=1903, avg=207.03, stdev=77.99 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:13:23.048 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:13:23.048 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 221], 00:13:23.048 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 1893], 99.95th=[ 1893], 00:13:23.048 | 99.99th=[ 1893] 00:13:23.048 bw ( KiB/s): min= 4096, max= 4096, per=23.16%, avg=4096.00, stdev= 0.00, samples=1 00:13:23.048 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:23.048 lat (usec) : 250=95.13%, 500=0.56% 00:13:23.048 lat (msec) : 2=0.19%, 50=4.12% 00:13:23.048 cpu : usr=0.30%, sys=0.69%, ctx=535, majf=0, minf=1 00:13:23.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.048 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.048 job3: (groupid=0, jobs=1): err= 0: pid=558680: Fri Jul 12 11:17:48 
2024 00:13:23.048 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:13:23.048 slat (nsec): min=8169, max=36285, avg=26543.32, stdev=9733.20 00:13:23.048 clat (usec): min=40811, max=41989, avg=41100.21, stdev=347.68 00:13:23.048 lat (usec): min=40846, max=42003, avg=41126.76, stdev=347.67 00:13:23.048 clat percentiles (usec): 00:13:23.048 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:23.048 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:23.048 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:13:23.048 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:23.048 | 99.99th=[42206] 00:13:23.048 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:13:23.048 slat (nsec): min=7333, max=48709, avg=14761.54, stdev=7202.03 00:13:23.048 clat (usec): min=149, max=406, avg=217.31, stdev=26.63 00:13:23.048 lat (usec): min=156, max=414, avg=232.07, stdev=29.47 00:13:23.049 clat percentiles (usec): 00:13:23.049 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 200], 00:13:23.049 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:13:23.049 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 258], 00:13:23.049 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 408], 99.95th=[ 408], 00:13:23.049 | 99.99th=[ 408] 00:13:23.049 bw ( KiB/s): min= 4096, max= 4096, per=23.16%, avg=4096.00, stdev= 0.00, samples=1 00:13:23.049 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:23.049 lat (usec) : 250=87.08%, 500=8.80% 00:13:23.049 lat (msec) : 50=4.12% 00:13:23.049 cpu : usr=0.78%, sys=0.68%, ctx=534, majf=0, minf=1 00:13:23.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:23.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.049 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:23.049 00:13:23.049 Run status group 0 (all jobs): 00:13:23.049 READ: bw=11.6MiB/s (12.2MB/s), 85.8KiB/s-8184KiB/s (87.8kB/s-8380kB/s), io=11.9MiB (12.5MB), run=1001-1026msec 00:13:23.049 WRITE: bw=17.3MiB/s (18.1MB/s), 1996KiB/s-9946KiB/s (2044kB/s-10.2MB/s), io=17.7MiB (18.6MB), run=1001-1026msec 00:13:23.049 00:13:23.049 Disk stats (read/write): 00:13:23.049 nvme0n1: ios=1864/2048, merge=0/0, ticks=424/327, in_queue=751, util=86.97% 00:13:23.049 nvme0n2: ios=983/1024, merge=0/0, ticks=610/208, in_queue=818, util=87.09% 00:13:23.049 nvme0n3: ios=41/512, merge=0/0, ticks=1686/97, in_queue=1783, util=96.76% 00:13:23.049 nvme0n4: ios=17/512, merge=0/0, ticks=698/105, in_queue=803, util=89.68% 00:13:23.049 11:17:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:23.049 [global] 00:13:23.049 thread=1 00:13:23.049 invalidate=1 00:13:23.049 rw=write 00:13:23.049 time_based=1 00:13:23.049 runtime=1 00:13:23.049 ioengine=libaio 00:13:23.049 direct=1 00:13:23.049 bs=4096 00:13:23.049 iodepth=128 00:13:23.049 norandommap=0 00:13:23.049 numjobs=1 00:13:23.049 00:13:23.049 verify_dump=1 00:13:23.049 verify_backlog=512 00:13:23.049 verify_state_save=0 00:13:23.049 do_verify=1 00:13:23.049 verify=crc32c-intel 00:13:23.049 [job0] 00:13:23.049 filename=/dev/nvme0n1 00:13:23.049 [job1] 00:13:23.049 filename=/dev/nvme0n2 00:13:23.049 [job2] 00:13:23.049 filename=/dev/nvme0n3 00:13:23.049 [job3] 00:13:23.049 filename=/dev/nvme0n4 00:13:23.049 Could not set queue depth (nvme0n1) 00:13:23.049 Could not set queue depth (nvme0n2) 00:13:23.049 Could not set queue depth (nvme0n3) 00:13:23.049 Could not set queue depth (nvme0n4) 00:13:23.049 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:13:23.049 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:23.049 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:23.049 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:23.049 fio-3.35 00:13:23.049 Starting 4 threads 00:13:24.448 00:13:24.448 job0: (groupid=0, jobs=1): err= 0: pid=558916: Fri Jul 12 11:17:50 2024 00:13:24.448 read: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:13:24.448 slat (usec): min=3, max=6206, avg=84.99, stdev=461.79 00:13:24.448 clat (usec): min=1034, max=20812, avg=11146.14, stdev=2186.50 00:13:24.448 lat (usec): min=2930, max=20829, avg=11231.14, stdev=2216.97 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 6128], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[ 9765], 00:13:24.448 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:13:24.448 | 70.00th=[11731], 80.00th=[12649], 90.00th=[14746], 95.00th=[15270], 00:13:24.448 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:13:24.448 | 99.99th=[20841] 00:13:24.448 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:13:24.448 slat (usec): min=4, max=6908, avg=80.94, stdev=405.53 00:13:24.448 clat (usec): min=6090, max=21651, avg=11363.37, stdev=2040.96 00:13:24.448 lat (usec): min=6107, max=21675, avg=11444.31, stdev=2077.55 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10159], 00:13:24.448 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:13:24.448 | 70.00th=[11207], 80.00th=[13566], 90.00th=[14746], 95.00th=[15139], 00:13:24.448 | 99.00th=[16188], 99.50th=[18482], 99.90th=[20841], 99.95th=[21103], 00:13:24.448 | 99.99th=[21627] 00:13:24.448 bw ( KiB/s): min=20488, max=24568, per=32.93%, avg=22528.00, stdev=2885.00, samples=2 
00:13:24.448 iops : min= 5122, max= 6142, avg=5632.00, stdev=721.25, samples=2 00:13:24.448 lat (msec) : 2=0.01%, 4=0.20%, 10=18.89%, 20=80.83%, 50=0.08% 00:13:24.448 cpu : usr=7.89%, sys=13.59%, ctx=560, majf=0, minf=1 00:13:24.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:24.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.448 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.448 job1: (groupid=0, jobs=1): err= 0: pid=558917: Fri Jul 12 11:17:50 2024 00:13:24.448 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:13:24.448 slat (usec): min=3, max=15203, avg=156.77, stdev=1059.63 00:13:24.448 clat (usec): min=5077, max=35682, avg=19243.44, stdev=6110.51 00:13:24.448 lat (usec): min=5085, max=35728, avg=19400.20, stdev=6194.17 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[11076], 20.00th=[11469], 00:13:24.448 | 30.00th=[15795], 40.00th=[19792], 50.00th=[20579], 60.00th=[21627], 00:13:24.448 | 70.00th=[22414], 80.00th=[22676], 90.00th=[26346], 95.00th=[31327], 00:13:24.448 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33424], 99.95th=[34341], 00:13:24.448 | 99.99th=[35914] 00:13:24.448 write: IOPS=3240, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1014msec); 0 zone resets 00:13:24.448 slat (usec): min=4, max=10048, avg=147.07, stdev=613.13 00:13:24.448 clat (usec): min=1511, max=71624, avg=21109.88, stdev=11302.62 00:13:24.448 lat (usec): min=1525, max=71635, avg=21256.96, stdev=11375.94 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 3228], 5.00th=[ 7635], 10.00th=[11207], 20.00th=[11863], 00:13:24.448 | 30.00th=[12256], 40.00th=[20579], 50.00th=[22676], 60.00th=[22938], 00:13:24.448 | 70.00th=[23200], 80.00th=[23725], 90.00th=[30802], 95.00th=[45876], 
00:13:24.448 | 99.00th=[66323], 99.50th=[66847], 99.90th=[71828], 99.95th=[71828], 00:13:24.448 | 99.99th=[71828] 00:13:24.448 bw ( KiB/s): min= 9288, max=15984, per=18.47%, avg=12636.00, stdev=4734.79, samples=2 00:13:24.448 iops : min= 2322, max= 3996, avg=3159.00, stdev=1183.70, samples=2 00:13:24.448 lat (msec) : 2=0.30%, 4=0.47%, 10=5.98%, 20=33.75%, 50=57.74% 00:13:24.448 lat (msec) : 100=1.76% 00:13:24.448 cpu : usr=4.84%, sys=6.91%, ctx=457, majf=0, minf=1 00:13:24.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:24.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.448 issued rwts: total=3072,3286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.448 job2: (groupid=0, jobs=1): err= 0: pid=558920: Fri Jul 12 11:17:50 2024 00:13:24.448 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:13:24.448 slat (usec): min=2, max=20977, avg=163.03, stdev=1185.45 00:13:24.448 clat (usec): min=4799, max=51537, avg=20474.89, stdev=8782.04 00:13:24.448 lat (usec): min=4804, max=51552, avg=20637.92, stdev=8862.54 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 8586], 5.00th=[11994], 10.00th=[13042], 20.00th=[14353], 00:13:24.448 | 30.00th=[15008], 40.00th=[15795], 50.00th=[16712], 60.00th=[21890], 00:13:24.448 | 70.00th=[22414], 80.00th=[23462], 90.00th=[34866], 95.00th=[42206], 00:13:24.448 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:13:24.448 | 99.99th=[51643] 00:13:24.448 write: IOPS=3479, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1009msec); 0 zone resets 00:13:24.448 slat (usec): min=3, max=11164, avg=126.57, stdev=609.20 00:13:24.448 clat (usec): min=3556, max=58052, avg=18545.20, stdev=9244.04 00:13:24.448 lat (usec): min=3575, max=58062, avg=18671.77, stdev=9308.95 00:13:24.448 clat percentiles (usec): 
00:13:24.448 | 1.00th=[ 4359], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[12518], 00:13:24.448 | 30.00th=[13042], 40.00th=[13566], 50.00th=[15926], 60.00th=[21627], 00:13:24.448 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23725], 95.00th=[31065], 00:13:24.448 | 99.00th=[57934], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:13:24.448 | 99.99th=[57934] 00:13:24.448 bw ( KiB/s): min=11216, max=15856, per=19.79%, avg=13536.00, stdev=3280.98, samples=2 00:13:24.448 iops : min= 2804, max= 3964, avg=3384.00, stdev=820.24, samples=2 00:13:24.448 lat (msec) : 4=0.24%, 10=6.76%, 20=47.80%, 50=43.55%, 100=1.64% 00:13:24.448 cpu : usr=3.47%, sys=8.73%, ctx=383, majf=0, minf=1 00:13:24.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:24.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.448 issued rwts: total=3072,3511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.448 job3: (groupid=0, jobs=1): err= 0: pid=558922: Fri Jul 12 11:17:50 2024 00:13:24.448 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:13:24.448 slat (usec): min=3, max=6264, avg=99.52, stdev=579.59 00:13:24.448 clat (usec): min=7931, max=19619, avg=13264.55, stdev=1560.87 00:13:24.448 lat (usec): min=7953, max=20157, avg=13364.07, stdev=1631.98 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11469], 20.00th=[12387], 00:13:24.448 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:13:24.448 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15008], 95.00th=[15533], 00:13:24.448 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:13:24.448 | 99.99th=[19530] 00:13:24.448 write: IOPS=4881, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1006msec); 0 zone resets 00:13:24.448 slat (usec): min=4, max=7452, 
avg=98.87, stdev=544.91 00:13:24.448 clat (usec): min=5061, max=22100, avg=13431.27, stdev=1842.53 00:13:24.448 lat (usec): min=5818, max=22120, avg=13530.14, stdev=1903.12 00:13:24.448 clat percentiles (usec): 00:13:24.448 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[11600], 20.00th=[12387], 00:13:24.448 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13566], 00:13:24.448 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15533], 95.00th=[16450], 00:13:24.448 | 99.00th=[18744], 99.50th=[19530], 99.90th=[21627], 99.95th=[21890], 00:13:24.448 | 99.99th=[22152] 00:13:24.448 bw ( KiB/s): min=18392, max=19880, per=27.98%, avg=19136.00, stdev=1052.17, samples=2 00:13:24.448 iops : min= 4598, max= 4970, avg=4784.00, stdev=263.04, samples=2 00:13:24.448 lat (msec) : 10=3.83%, 20=95.91%, 50=0.25% 00:13:24.448 cpu : usr=7.86%, sys=10.45%, ctx=416, majf=0, minf=1 00:13:24.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:24.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.448 issued rwts: total=4608,4911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.448 00:13:24.448 Run status group 0 (all jobs): 00:13:24.448 READ: bw=63.1MiB/s (66.2MB/s), 11.8MiB/s-21.9MiB/s (12.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1002-1014msec 00:13:24.448 WRITE: bw=66.8MiB/s (70.0MB/s), 12.7MiB/s-22.0MiB/s (13.3MB/s-23.0MB/s), io=67.7MiB (71.0MB), run=1002-1014msec 00:13:24.448 00:13:24.448 Disk stats (read/write): 00:13:24.448 nvme0n1: ios=4658/4790, merge=0/0, ticks=24715/24790, in_queue=49505, util=86.87% 00:13:24.448 nvme0n2: ios=2598/2703, merge=0/0, ticks=50227/51431, in_queue=101658, util=100.00% 00:13:24.448 nvme0n3: ios=2703/3072, merge=0/0, ticks=54360/48613, in_queue=102973, util=97.70% 00:13:24.448 nvme0n4: ios=3823/4096, merge=0/0, ticks=24939/25375, 
in_queue=50314, util=98.11% 00:13:24.448 11:17:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:24.448 [global] 00:13:24.448 thread=1 00:13:24.448 invalidate=1 00:13:24.448 rw=randwrite 00:13:24.448 time_based=1 00:13:24.448 runtime=1 00:13:24.449 ioengine=libaio 00:13:24.449 direct=1 00:13:24.449 bs=4096 00:13:24.449 iodepth=128 00:13:24.449 norandommap=0 00:13:24.449 numjobs=1 00:13:24.449 00:13:24.449 verify_dump=1 00:13:24.449 verify_backlog=512 00:13:24.449 verify_state_save=0 00:13:24.449 do_verify=1 00:13:24.449 verify=crc32c-intel 00:13:24.449 [job0] 00:13:24.449 filename=/dev/nvme0n1 00:13:24.449 [job1] 00:13:24.449 filename=/dev/nvme0n2 00:13:24.449 [job2] 00:13:24.449 filename=/dev/nvme0n3 00:13:24.449 [job3] 00:13:24.449 filename=/dev/nvme0n4 00:13:24.449 Could not set queue depth (nvme0n1) 00:13:24.449 Could not set queue depth (nvme0n2) 00:13:24.449 Could not set queue depth (nvme0n3) 00:13:24.449 Could not set queue depth (nvme0n4) 00:13:24.449 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.449 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.449 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.449 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:24.449 fio-3.35 00:13:24.449 Starting 4 threads 00:13:25.824 00:13:25.824 job0: (groupid=0, jobs=1): err= 0: pid=559265: Fri Jul 12 11:17:51 2024 00:13:25.824 read: IOPS=5409, BW=21.1MiB/s (22.2MB/s)(22.1MiB/1047msec) 00:13:25.824 slat (usec): min=4, max=6054, avg=82.86, stdev=483.28 00:13:25.824 clat (usec): min=6733, max=48774, avg=10903.20, stdev=3104.06 00:13:25.824 lat (usec): min=6765, max=48783, 
avg=10986.06, stdev=3133.61 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10159], 00:13:25.824 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:13:25.824 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12125], 95.00th=[13304], 00:13:25.824 | 99.00th=[15533], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:13:25.824 | 99.99th=[49021] 00:13:25.824 write: IOPS=5868, BW=22.9MiB/s (24.0MB/s)(24.0MiB/1047msec); 0 zone resets 00:13:25.824 slat (usec): min=4, max=6223, avg=75.58, stdev=355.68 00:13:25.824 clat (usec): min=4788, max=55273, avg=11518.54, stdev=5390.75 00:13:25.824 lat (usec): min=4798, max=60779, avg=11594.11, stdev=5403.95 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[ 6521], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10159], 00:13:25.824 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:13:25.824 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12780], 95.00th=[13829], 00:13:25.824 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:13:25.824 | 99.99th=[55313] 00:13:25.824 bw ( KiB/s): min=23808, max=24576, per=35.10%, avg=24192.00, stdev=543.06, samples=2 00:13:25.824 iops : min= 5952, max= 6144, avg=6048.00, stdev=135.76, samples=2 00:13:25.824 lat (msec) : 10=14.60%, 20=84.32%, 50=0.54%, 100=0.53% 00:13:25.824 cpu : usr=8.41%, sys=13.19%, ctx=622, majf=0, minf=1 00:13:25.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:25.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.824 issued rwts: total=5664,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.824 job1: (groupid=0, jobs=1): err= 0: pid=559266: Fri Jul 12 11:17:51 2024 00:13:25.824 read: IOPS=3552, BW=13.9MiB/s 
(14.5MB/s)(14.0MiB/1009msec) 00:13:25.824 slat (usec): min=2, max=14491, avg=108.16, stdev=714.85 00:13:25.824 clat (usec): min=9806, max=53304, avg=15272.85, stdev=4624.99 00:13:25.824 lat (usec): min=9815, max=53313, avg=15381.02, stdev=4681.35 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[10945], 20.00th=[11994], 00:13:25.824 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:13:25.824 | 70.00th=[15533], 80.00th=[17695], 90.00th=[21627], 95.00th=[23987], 00:13:25.824 | 99.00th=[29230], 99.50th=[34341], 99.90th=[53216], 99.95th=[53216], 00:13:25.824 | 99.99th=[53216] 00:13:25.824 write: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1009msec); 0 zone resets 00:13:25.824 slat (usec): min=3, max=15746, avg=138.43, stdev=829.98 00:13:25.824 clat (usec): min=805, max=52029, avg=19365.78, stdev=10454.58 00:13:25.824 lat (usec): min=818, max=52055, avg=19504.21, stdev=10538.04 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[ 3884], 5.00th=[ 6915], 10.00th=[ 8586], 20.00th=[11731], 00:13:25.824 | 30.00th=[12649], 40.00th=[13042], 50.00th=[15795], 60.00th=[19792], 00:13:25.824 | 70.00th=[22414], 80.00th=[26608], 90.00th=[36963], 95.00th=[41157], 00:13:25.824 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:13:25.824 | 99.99th=[52167] 00:13:25.824 bw ( KiB/s): min=13768, max=15496, per=21.23%, avg=14632.00, stdev=1221.88, samples=2 00:13:25.824 iops : min= 3442, max= 3874, avg=3658.00, stdev=305.47, samples=2 00:13:25.824 lat (usec) : 1000=0.09% 00:13:25.824 lat (msec) : 4=0.60%, 10=7.63%, 20=64.74%, 50=26.30%, 100=0.64% 00:13:25.824 cpu : usr=5.65%, sys=7.04%, ctx=294, majf=0, minf=1 00:13:25.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:25.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.824 issued rwts: 
total=3584,3785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.824 job2: (groupid=0, jobs=1): err= 0: pid=559267: Fri Jul 12 11:17:51 2024 00:13:25.824 read: IOPS=4807, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec) 00:13:25.824 slat (usec): min=2, max=20729, avg=103.12, stdev=763.54 00:13:25.824 clat (usec): min=807, max=32803, avg=13277.34, stdev=3301.92 00:13:25.824 lat (usec): min=3185, max=32819, avg=13380.46, stdev=3343.58 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[ 5080], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11731], 00:13:25.824 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12780], 00:13:25.824 | 70.00th=[13566], 80.00th=[14353], 90.00th=[17957], 95.00th=[21365], 00:13:25.824 | 99.00th=[22938], 99.50th=[23200], 99.90th=[24773], 99.95th=[24773], 00:13:25.824 | 99.99th=[32900] 00:13:25.824 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:13:25.824 slat (usec): min=4, max=17607, avg=84.27, stdev=546.58 00:13:25.824 clat (usec): min=317, max=42902, avg=12335.70, stdev=3460.72 00:13:25.824 lat (usec): min=335, max=42910, avg=12419.97, stdev=3510.41 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[ 3523], 5.00th=[ 6194], 10.00th=[ 8094], 20.00th=[11469], 00:13:25.824 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[12649], 00:13:25.824 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14746], 95.00th=[18482], 00:13:25.824 | 99.00th=[22676], 99.50th=[25822], 99.90th=[38536], 99.95th=[38536], 00:13:25.824 | 99.99th=[42730] 00:13:25.824 bw ( KiB/s): min=20480, max=20480, per=29.72%, avg=20480.00, stdev= 0.00, samples=2 00:13:25.824 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:25.824 lat (usec) : 500=0.02%, 1000=0.01% 00:13:25.824 lat (msec) : 2=0.10%, 4=1.10%, 10=10.72%, 20=83.72%, 50=4.34% 00:13:25.824 cpu : usr=4.50%, sys=7.49%, ctx=552, majf=0, minf=1 00:13:25.824 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:25.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.824 issued rwts: total=4817,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.824 job3: (groupid=0, jobs=1): err= 0: pid=559268: Fri Jul 12 11:17:51 2024 00:13:25.824 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:13:25.824 slat (usec): min=4, max=14258, avg=146.22, stdev=949.41 00:13:25.824 clat (usec): min=9789, max=38745, avg=18272.69, stdev=5223.23 00:13:25.824 lat (usec): min=10556, max=43889, avg=18418.91, stdev=5307.96 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[11207], 5.00th=[11600], 10.00th=[13173], 20.00th=[14484], 00:13:25.824 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16188], 60.00th=[17171], 00:13:25.824 | 70.00th=[18744], 80.00th=[23987], 90.00th=[26608], 95.00th=[29230], 00:13:25.824 | 99.00th=[33424], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:13:25.824 | 99.99th=[38536] 00:13:25.824 write: IOPS=2961, BW=11.6MiB/s (12.1MB/s)(11.7MiB/1010msec); 0 zone resets 00:13:25.824 slat (usec): min=5, max=10481, avg=198.61, stdev=811.23 00:13:25.824 clat (usec): min=5442, max=54689, avg=27053.92, stdev=11790.12 00:13:25.824 lat (usec): min=9527, max=54701, avg=27252.53, stdev=11859.08 00:13:25.824 clat percentiles (usec): 00:13:25.824 | 1.00th=[11207], 5.00th=[13173], 10.00th=[13698], 20.00th=[14353], 00:13:25.824 | 30.00th=[19530], 40.00th=[21890], 50.00th=[23200], 60.00th=[26608], 00:13:25.824 | 70.00th=[34866], 80.00th=[40109], 90.00th=[45351], 95.00th=[49546], 00:13:25.824 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:13:25.824 | 99.99th=[54789] 00:13:25.824 bw ( KiB/s): min=10616, max=12288, per=16.62%, avg=11452.00, stdev=1182.28, samples=2 00:13:25.824 iops : min= 2654, max= 3072, 
avg=2863.00, stdev=295.57, samples=2 00:13:25.824 lat (msec) : 10=0.16%, 20=49.83%, 50=47.34%, 100=2.67% 00:13:25.824 cpu : usr=4.46%, sys=7.53%, ctx=381, majf=0, minf=1 00:13:25.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:25.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:25.824 issued rwts: total=2560,2991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:25.824 00:13:25.824 Run status group 0 (all jobs): 00:13:25.824 READ: bw=62.0MiB/s (65.0MB/s), 9.90MiB/s-21.1MiB/s (10.4MB/s-22.2MB/s), io=64.9MiB (68.1MB), run=1002-1047msec 00:13:25.824 WRITE: bw=67.3MiB/s (70.6MB/s), 11.6MiB/s-22.9MiB/s (12.1MB/s-24.0MB/s), io=70.5MiB (73.9MB), run=1002-1047msec 00:13:25.824 00:13:25.824 Disk stats (read/write): 00:13:25.824 nvme0n1: ios=4935/5120, merge=0/0, ticks=25131/25032, in_queue=50163, util=98.00% 00:13:25.824 nvme0n2: ios=3086/3072, merge=0/0, ticks=28457/33131, in_queue=61588, util=97.97% 00:13:25.824 nvme0n3: ios=4149/4391, merge=0/0, ticks=50991/46763, in_queue=97754, util=97.91% 00:13:25.824 nvme0n4: ios=2069/2503, merge=0/0, ticks=19428/32855, in_queue=52283, util=97.89% 00:13:25.824 11:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:25.824 11:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=559402 00:13:25.824 11:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:25.824 11:17:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:25.824 [global] 00:13:25.824 thread=1 00:13:25.824 invalidate=1 00:13:25.824 rw=read 00:13:25.824 time_based=1 00:13:25.824 runtime=10 00:13:25.824 ioengine=libaio 00:13:25.824 direct=1 00:13:25.824 bs=4096 00:13:25.824 iodepth=1 00:13:25.824 norandommap=1 
00:13:25.824 numjobs=1 00:13:25.824 00:13:25.824 [job0] 00:13:25.824 filename=/dev/nvme0n1 00:13:25.824 [job1] 00:13:25.824 filename=/dev/nvme0n2 00:13:25.824 [job2] 00:13:25.824 filename=/dev/nvme0n3 00:13:25.824 [job3] 00:13:25.824 filename=/dev/nvme0n4 00:13:25.824 Could not set queue depth (nvme0n1) 00:13:25.824 Could not set queue depth (nvme0n2) 00:13:25.824 Could not set queue depth (nvme0n3) 00:13:25.824 Could not set queue depth (nvme0n4) 00:13:26.082 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.082 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.082 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.082 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:26.082 fio-3.35 00:13:26.082 Starting 4 threads 00:13:29.375 11:17:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:29.375 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9379840, buflen=4096 00:13:29.375 fio: pid=559503, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:29.375 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:29.375 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.375 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:29.375 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=54013952, buflen=4096 00:13:29.375 fio: pid=559502, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 
00:13:29.633 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.633 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:29.633 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3444736, buflen=4096 00:13:29.633 fio: pid=559490, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:29.891 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:29.891 11:17:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:29.891 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=372736, buflen=4096 00:13:29.891 fio: pid=559495, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:29.891 00:13:29.891 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=559490: Fri Jul 12 11:17:55 2024 00:13:29.891 read: IOPS=245, BW=981KiB/s (1005kB/s)(3364KiB/3428msec) 00:13:29.891 slat (nsec): min=6890, max=46731, avg=14237.30, stdev=6672.96 00:13:29.891 clat (usec): min=192, max=42084, avg=4031.63, stdev=11914.60 00:13:29.891 lat (usec): min=200, max=42102, avg=4045.83, stdev=11916.83 00:13:29.891 clat percentiles (usec): 00:13:29.891 | 1.00th=[ 206], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 241], 00:13:29.891 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:13:29.891 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[41681], 00:13:29.891 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:29.891 | 99.99th=[42206] 00:13:29.891 bw ( KiB/s): min= 96, max= 3808, per=6.23%, avg=1108.00, stdev=1620.40, samples=6 00:13:29.891 iops : min= 24, max= 952, avg=277.00, 
stdev=405.10, samples=6 00:13:29.891 lat (usec) : 250=40.97%, 500=49.76% 00:13:29.891 lat (msec) : 50=9.14% 00:13:29.891 cpu : usr=0.06%, sys=0.67%, ctx=845, majf=0, minf=1 00:13:29.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.891 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.891 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.891 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=559495: Fri Jul 12 11:17:55 2024 00:13:29.891 read: IOPS=24, BW=98.7KiB/s (101kB/s)(364KiB/3689msec) 00:13:29.891 slat (nsec): min=9628, max=55363, avg=24292.86, stdev=11404.95 00:13:29.891 clat (usec): min=500, max=42279, avg=40260.60, stdev=5998.86 00:13:29.891 lat (usec): min=515, max=42289, avg=40284.77, stdev=6000.05 00:13:29.891 clat percentiles (usec): 00:13:29.891 | 1.00th=[ 502], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:29.891 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:29.891 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:13:29.891 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:29.891 | 99.99th=[42206] 00:13:29.891 bw ( KiB/s): min= 93, max= 112, per=0.56%, avg=99.00, stdev= 6.66, samples=7 00:13:29.891 iops : min= 23, max= 28, avg=24.71, stdev= 1.70, samples=7 00:13:29.891 lat (usec) : 750=2.17% 00:13:29.891 lat (msec) : 50=96.74% 00:13:29.891 cpu : usr=0.11%, sys=0.00%, ctx=94, majf=0, minf=1 00:13:29.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.891 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.891 issued rwts: 
total=92,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.891 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=559502: Fri Jul 12 11:17:55 2024 00:13:29.891 read: IOPS=4151, BW=16.2MiB/s (17.0MB/s)(51.5MiB/3177msec) 00:13:29.891 slat (usec): min=4, max=8867, avg=10.79, stdev=77.31 00:13:29.891 clat (usec): min=169, max=41320, avg=226.28, stdev=953.55 00:13:29.891 lat (usec): min=174, max=50005, avg=237.08, stdev=985.05 00:13:29.891 clat percentiles (usec): 00:13:29.891 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:13:29.891 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:13:29.891 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 233], 00:13:29.891 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 429], 99.95th=[40633], 00:13:29.891 | 99.99th=[41157] 00:13:29.891 bw ( KiB/s): min=12016, max=19792, per=98.79%, avg=17577.33, stdev=2822.23, samples=6 00:13:29.892 iops : min= 3004, max= 4948, avg=4394.33, stdev=705.56, samples=6 00:13:29.892 lat (usec) : 250=97.18%, 500=2.74% 00:13:29.892 lat (msec) : 4=0.01%, 20=0.02%, 50=0.05% 00:13:29.892 cpu : usr=1.79%, sys=4.75%, ctx=13189, majf=0, minf=1 00:13:29.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.892 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.892 issued rwts: total=13188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.892 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=559503: Fri Jul 12 11:17:55 2024 00:13:29.892 read: IOPS=791, BW=3164KiB/s (3240kB/s)(9160KiB/2895msec) 00:13:29.892 slat (nsec): min=4298, max=53187, avg=9959.99, stdev=6114.69 00:13:29.892 clat (usec): 
min=179, max=42045, avg=1247.49, stdev=6273.90 00:13:29.892 lat (usec): min=184, max=42064, avg=1257.44, stdev=6275.97 00:13:29.892 clat percentiles (usec): 00:13:29.892 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:13:29.892 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:13:29.892 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 429], 95.00th=[ 474], 00:13:29.892 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:29.892 | 99.99th=[42206] 00:13:29.892 bw ( KiB/s): min= 96, max= 9488, per=13.39%, avg=2382.40, stdev=4069.19, samples=5 00:13:29.892 iops : min= 24, max= 2372, avg=595.60, stdev=1017.30, samples=5 00:13:29.892 lat (usec) : 250=53.99%, 500=42.60%, 750=0.92%, 1000=0.04% 00:13:29.892 lat (msec) : 2=0.04%, 50=2.36% 00:13:29.892 cpu : usr=0.35%, sys=0.90%, ctx=2292, majf=0, minf=1 00:13:29.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:29.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.892 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.892 issued rwts: total=2291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:29.892 00:13:29.892 Run status group 0 (all jobs): 00:13:29.892 READ: bw=17.4MiB/s (18.2MB/s), 98.7KiB/s-16.2MiB/s (101kB/s-17.0MB/s), io=64.1MiB (67.2MB), run=2895-3689msec 00:13:29.892 00:13:29.892 Disk stats (read/write): 00:13:29.892 nvme0n1: ios=881/0, merge=0/0, ticks=3898/0, in_queue=3898, util=99.31% 00:13:29.892 nvme0n2: ios=89/0, merge=0/0, ticks=3584/0, in_queue=3584, util=96.46% 00:13:29.892 nvme0n3: ios=13185/0, merge=0/0, ticks=2801/0, in_queue=2801, util=96.50% 00:13:29.892 nvme0n4: ios=2205/0, merge=0/0, ticks=2783/0, in_queue=2783, util=96.74% 00:13:30.149 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.149 
11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:30.407 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.407 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:30.664 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.664 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:30.922 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:30.922 11:17:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 559402 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.179 11:17:57 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:31.179 nvmf hotplug test: fio failed as expected 00:13:31.179 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.436 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.436 rmmod nvme_tcp 00:13:31.694 rmmod nvme_fabrics 00:13:31.694 rmmod nvme_keyring 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.694 11:17:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 557370 ']' 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 557370 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 557370 ']' 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 557370 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 557370 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 557370' 00:13:31.694 killing process with pid 557370 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 557370 00:13:31.694 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 557370 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.952 11:17:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.853 11:17:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:33.853 00:13:33.853 real 0m23.540s 00:13:33.853 user 1m22.500s 00:13:33.853 sys 0m6.767s 00:13:33.853 11:17:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.853 11:17:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.853 ************************************ 00:13:33.853 END TEST nvmf_fio_target 00:13:33.853 ************************************ 00:13:33.853 11:17:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:33.853 11:17:59 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:33.853 11:17:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:33.853 11:17:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.853 11:17:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:34.110 ************************************ 00:13:34.110 START TEST nvmf_bdevio 00:13:34.110 ************************************ 00:13:34.110 11:17:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:34.110 * Looking for test storage... 
00:13:34.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.110 11:18:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.110 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:34.110 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.110 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.110 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.111 11:18:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:36.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:36.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:36.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:36.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.010 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:36.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:13:36.268 00:13:36.268 --- 10.0.0.2 ping statistics --- 00:13:36.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.268 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:13:36.268 00:13:36.268 --- 10.0.0.1 ping statistics --- 00:13:36.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.268 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=562228 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 562228 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 562228 ']' 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.268 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.268 [2024-07-12 11:18:02.344559] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:13:36.268 [2024-07-12 11:18:02.344642] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.268 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.526 [2024-07-12 11:18:02.435243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.526 [2024-07-12 11:18:02.587582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.526 [2024-07-12 11:18:02.587651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.526 [2024-07-12 11:18:02.587678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.526 [2024-07-12 11:18:02.587701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.526 [2024-07-12 11:18:02.587729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.526 [2024-07-12 11:18:02.587832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:36.526 [2024-07-12 11:18:02.587899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:36.526 [2024-07-12 11:18:02.587957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:36.526 [2024-07-12 11:18:02.587961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 [2024-07-12 11:18:02.747760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 Malloc0 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 [2024-07-12 11:18:02.799819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:13:36.785 { 00:13:36.785 "params": { 00:13:36.785 "name": "Nvme$subsystem", 00:13:36.785 "trtype": "$TEST_TRANSPORT", 00:13:36.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:36.785 "adrfam": "ipv4", 00:13:36.785 "trsvcid": "$NVMF_PORT", 00:13:36.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:36.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:36.785 "hdgst": ${hdgst:-false}, 00:13:36.785 "ddgst": ${ddgst:-false} 00:13:36.785 }, 00:13:36.785 "method": "bdev_nvme_attach_controller" 00:13:36.785 } 00:13:36.785 EOF 00:13:36.785 )") 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:36.785 11:18:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:36.785 "params": { 00:13:36.785 "name": "Nvme1", 00:13:36.785 "trtype": "tcp", 00:13:36.785 "traddr": "10.0.0.2", 00:13:36.785 "adrfam": "ipv4", 00:13:36.785 "trsvcid": "4420", 00:13:36.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.785 "hdgst": false, 00:13:36.785 "ddgst": false 00:13:36.785 }, 00:13:36.785 "method": "bdev_nvme_attach_controller" 00:13:36.785 }' 00:13:36.785 [2024-07-12 11:18:02.849253] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:13:36.785 [2024-07-12 11:18:02.849346] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid562258 ] 00:13:36.785 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.785 [2024-07-12 11:18:02.915161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.056 [2024-07-12 11:18:03.032338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.056 [2024-07-12 11:18:03.032390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.056 [2024-07-12 11:18:03.032394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.320 I/O targets: 00:13:37.320 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:37.320 00:13:37.320 00:13:37.320 CUnit - A unit testing framework for C - Version 2.1-3 00:13:37.320 http://cunit.sourceforge.net/ 00:13:37.320 00:13:37.320 00:13:37.320 Suite: bdevio tests on: Nvme1n1 00:13:37.320 Test: blockdev write read block ...passed 00:13:37.320 Test: blockdev write zeroes read block ...passed 00:13:37.320 Test: blockdev write zeroes read no split ...passed 00:13:37.577 Test: blockdev write zeroes read split ...passed 00:13:37.577 Test: blockdev write zeroes read split partial ...passed 00:13:37.577 Test: blockdev reset ...[2024-07-12 11:18:03.488240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:37.577 [2024-07-12 11:18:03.488348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128580 (9): Bad file descriptor 00:13:37.577 [2024-07-12 11:18:03.541991] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:37.577 passed 00:13:37.577 Test: blockdev write read 8 blocks ...passed 00:13:37.577 Test: blockdev write read size > 128k ...passed 00:13:37.577 Test: blockdev write read invalid size ...passed 00:13:37.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:37.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:37.577 Test: blockdev write read max offset ...passed 00:13:37.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:37.577 Test: blockdev writev readv 8 blocks ...passed 00:13:37.577 Test: blockdev writev readv 30 x 1block ...passed 00:13:37.835 Test: blockdev writev readv block ...passed 00:13:37.835 Test: blockdev writev readv size > 128k ...passed 00:13:37.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:37.835 Test: blockdev comparev and writev ...[2024-07-12 11:18:03.752864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.752906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.752931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.752948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.753275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.753300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.753322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.753345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.753674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.753698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.753719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.753735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.754051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.754076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.754097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:37.835 [2024-07-12 11:18:03.754114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:37.835 passed 00:13:37.835 Test: blockdev nvme passthru rw ...passed 00:13:37.835 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:18:03.836100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.835 [2024-07-12 11:18:03.836128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.836264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.835 [2024-07-12 11:18:03.836287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.836421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.835 [2024-07-12 11:18:03.836444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:37.835 [2024-07-12 11:18:03.836573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:37.835 [2024-07-12 11:18:03.836595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:37.835 passed 00:13:37.835 Test: blockdev nvme admin passthru ...passed 00:13:37.835 Test: blockdev copy ...passed 00:13:37.835 00:13:37.835 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.835 suites 1 1 n/a 0 0 00:13:37.835 tests 23 23 23 0 0 00:13:37.835 asserts 152 152 152 0 n/a 00:13:37.835 00:13:37.835 Elapsed time = 1.115 seconds 00:13:38.092 11:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.093 rmmod nvme_tcp 00:13:38.093 rmmod nvme_fabrics 00:13:38.093 rmmod nvme_keyring 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 562228 ']' 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 562228 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 562228 ']' 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 562228 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.093 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 562228 00:13:38.350 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:38.350 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:38.350 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 562228' 00:13:38.350 killing process with pid 562228 00:13:38.350 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 562228 
00:13:38.350 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 562228 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.609 11:18:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.511 11:18:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.511 00:13:40.511 real 0m6.541s 00:13:40.511 user 0m10.711s 00:13:40.511 sys 0m2.196s 00:13:40.511 11:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.511 11:18:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:40.511 ************************************ 00:13:40.511 END TEST nvmf_bdevio 00:13:40.512 ************************************ 00:13:40.512 11:18:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:40.512 11:18:06 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:40.512 11:18:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.512 11:18:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.512 11:18:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:40.512 ************************************ 00:13:40.512 START TEST nvmf_auth_target 00:13:40.512 
************************************ 00:13:40.512 11:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:40.512 * Looking for test storage... 00:13:40.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.771 11:18:06 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.771 11:18:06 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.771 11:18:06 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.771 11:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.674 11:18:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:42.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.674 11:18:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:42.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:42.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:42.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:42.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:13:42.674 00:13:42.674 --- 10.0.0.2 ping statistics --- 00:13:42.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.674 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:13:42.674 00:13:42.674 --- 10.0.0.1 ping statistics --- 00:13:42.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.674 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.674 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
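The trace above (`nvmf_tcp_init` in `nvmf/common.sh`) builds the test topology by moving the target-side port into its own network namespace, so initiator (10.0.0.1 on `cvl_0_1`) and target (10.0.0.2 on `cvl_0_0`) talk over real NICs on one host. A dry-run sketch of that command sequence, with interface and namespace names taken from the log (drop the `echo` in `run` to actually apply; requires root):

```shell
# Dry-run: print each command instead of executing it.
run() { echo "+ $*"; }

setup_cmds() {
    run ip netns add cvl_0_0_ns_spdk                               # target namespace
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move target port in
    run ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
    run ping -c 1 10.0.0.2                                         # connectivity check
}

setup_cmds
```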
# xtrace_disable 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=564908 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 564908 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 564908 ']' 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.933 11:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=564968 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:43.191 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=baa1ca282836c072eeefa6b0f3355d91ca96b1c36807af8d 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BCm 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key baa1ca282836c072eeefa6b0f3355d91ca96b1c36807af8d 0 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 baa1ca282836c072eeefa6b0f3355d91ca96b1c36807af8d 0 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=baa1ca282836c072eeefa6b0f3355d91ca96b1c36807af8d 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BCm 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BCm 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.BCm 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:43.192 11:18:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b88eb465965aaf55cada2f0d1616f9a4b2e2b3e7746d776b52cb111ef28cc7d7 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KEt 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b88eb465965aaf55cada2f0d1616f9a4b2e2b3e7746d776b52cb111ef28cc7d7 3 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b88eb465965aaf55cada2f0d1616f9a4b2e2b3e7746d776b52cb111ef28cc7d7 3 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b88eb465965aaf55cada2f0d1616f9a4b2e2b3e7746d776b52cb111ef28cc7d7 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KEt 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KEt 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.KEt 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a3fd598885097f6d167bf342738f607b 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Kwg 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a3fd598885097f6d167bf342738f607b 1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a3fd598885097f6d167bf342738f607b 1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a3fd598885097f6d167bf342738f607b 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Kwg 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Kwg 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Kwg 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b47b90c3de7274c63a0335fcf242e9e260a4e94ae639f70b 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9ik 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b47b90c3de7274c63a0335fcf242e9e260a4e94ae639f70b 2 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b47b90c3de7274c63a0335fcf242e9e260a4e94ae639f70b 2 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b47b90c3de7274c63a0335fcf242e9e260a4e94ae639f70b 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:43.192 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9ik 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9ik 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.9ik 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3f2cd5e5d3e8c98dc2c2a322521aded9672dff5f5fce48cc 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0uJ 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f2cd5e5d3e8c98dc2c2a322521aded9672dff5f5fce48cc 2 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3f2cd5e5d3e8c98dc2c2a322521aded9672dff5f5fce48cc 2 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f2cd5e5d3e8c98dc2c2a322521aded9672dff5f5fce48cc 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0uJ 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0uJ 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.0uJ 00:13:43.450 11:18:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3caefd8597a7aeb3cfa6fa1ce21678e 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.D6N 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3caefd8597a7aeb3cfa6fa1ce21678e 1 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3caefd8597a7aeb3cfa6fa1ce21678e 1 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e3caefd8597a7aeb3cfa6fa1ce21678e 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.D6N 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.D6N 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.D6N 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2e9f96b5ad4c78db2bce2093910f3e1ec55e1995ba0585183b981062754bb315 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:43.450 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Arm 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2e9f96b5ad4c78db2bce2093910f3e1ec55e1995ba0585183b981062754bb315 3 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2e9f96b5ad4c78db2bce2093910f3e1ec55e1995ba0585183b981062754bb315 3 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2e9f96b5ad4c78db2bce2093910f3e1ec55e1995ba0585183b981062754bb315 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Arm 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Arm 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Arm 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 564908 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 564908 ']' 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
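Each `gen_dhchap_key <digest> <len>` call above draws `len` hex characters from `/dev/urandom` (the ASCII hex text itself is the secret, matching the `xxd -p` step in `nvmf/common.sh`) and wraps it in the DHHC-1 secret representation via an inline Python step. A self-contained sketch of that pair; the little-endian CRC32 trailer inside the base64 body is an assumption based on the NVMe DH-HMAC-CHAP secret format, not lifted from the log:

```shell
gen_dhchap_key() {
    local digest=$1 len=$2 key
    # digest name -> index used in the DHHC-1 header
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    # len hex characters of randomness; the hex string is the secret
    key=$(od -An -tx1 -N $((len / 2)) /dev/urandom | tr -d ' \n')
    # DHHC-1:<digest>:<base64(secret + CRC32 trailer)>:
    # (trailer layout assumed from the NVMe spec; verify against your tree)
    python3 -c 'import base64, struct, sys, zlib
s = sys.argv[1].encode()
blob = s + struct.pack("<I", zlib.crc32(s))
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(blob).decode()))' \
        "$key" "${digests[$digest]}"
}
```

The resulting string is what gets written to `/tmp/spdk.key-*` files, `chmod 0600`'d, and registered on both sides with `keyring_file_add_key`.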
00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.451 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 564968 /var/tmp/host.sock 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 564968 ']' 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:43.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.708 11:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BCm 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.BCm 00:13:43.965 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.BCm 00:13:44.222 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.KEt ]] 00:13:44.222 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KEt 00:13:44.222 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.222 11:18:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.222 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.222 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KEt 00:13:44.222 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KEt 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Kwg 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Kwg 00:13:44.478 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Kwg 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.9ik ]] 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9ik 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9ik 00:13:44.735 11:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9ik 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0uJ 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0uJ 00:13:44.992 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0uJ 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.D6N ]] 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.D6N 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.D6N 00:13:45.248 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.D6N 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Arm 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Arm 00:13:45.505 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Arm 00:13:45.761 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:45.761 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:45.761 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.761 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.761 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:45.761 11:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:46.018 11:18:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.018 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.274 00:13:46.274 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.274 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.274 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.532 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.532 
11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.532 11:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.532 11:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.532 11:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.532 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.532 { 00:13:46.532 "cntlid": 1, 00:13:46.532 "qid": 0, 00:13:46.532 "state": "enabled", 00:13:46.532 "thread": "nvmf_tgt_poll_group_000", 00:13:46.532 "listen_address": { 00:13:46.532 "trtype": "TCP", 00:13:46.532 "adrfam": "IPv4", 00:13:46.532 "traddr": "10.0.0.2", 00:13:46.532 "trsvcid": "4420" 00:13:46.532 }, 00:13:46.532 "peer_address": { 00:13:46.532 "trtype": "TCP", 00:13:46.532 "adrfam": "IPv4", 00:13:46.532 "traddr": "10.0.0.1", 00:13:46.532 "trsvcid": "45678" 00:13:46.532 }, 00:13:46.532 "auth": { 00:13:46.532 "state": "completed", 00:13:46.532 "digest": "sha256", 00:13:46.532 "dhgroup": "null" 00:13:46.532 } 00:13:46.532 } 00:13:46.532 ]' 00:13:46.532 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.793 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.080 11:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:48.015 11:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.015 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.580 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.580 { 00:13:48.580 "cntlid": 3, 00:13:48.580 "qid": 0, 00:13:48.580 "state": "enabled", 00:13:48.580 "thread": "nvmf_tgt_poll_group_000", 00:13:48.580 "listen_address": { 00:13:48.580 "trtype": "TCP", 00:13:48.580 "adrfam": "IPv4", 00:13:48.580 "traddr": "10.0.0.2", 00:13:48.580 "trsvcid": "4420" 00:13:48.580 }, 00:13:48.580 "peer_address": { 00:13:48.580 "trtype": "TCP", 00:13:48.580 "adrfam": "IPv4", 00:13:48.580 "traddr": "10.0.0.1", 00:13:48.580 "trsvcid": "45702" 00:13:48.580 }, 00:13:48.580 "auth": { 00:13:48.580 "state": "completed", 00:13:48.580 "digest": "sha256", 00:13:48.580 "dhgroup": "null" 00:13:48.580 } 00:13:48.580 } 00:13:48.580 ]' 00:13:48.580 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:13:48.837 11:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.095 11:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:50.027 11:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.285 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.542 00:13:50.542 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.542 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.542 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.799 { 00:13:50.799 "cntlid": 5, 00:13:50.799 "qid": 0, 00:13:50.799 "state": "enabled", 00:13:50.799 "thread": "nvmf_tgt_poll_group_000", 00:13:50.799 "listen_address": { 00:13:50.799 "trtype": "TCP", 00:13:50.799 "adrfam": "IPv4", 00:13:50.799 "traddr": "10.0.0.2", 00:13:50.799 "trsvcid": "4420" 00:13:50.799 }, 00:13:50.799 "peer_address": { 00:13:50.799 "trtype": "TCP", 00:13:50.799 "adrfam": "IPv4", 00:13:50.799 "traddr": "10.0.0.1", 00:13:50.799 "trsvcid": "45726" 00:13:50.799 }, 00:13:50.799 "auth": { 00:13:50.799 "state": "completed", 00:13:50.799 "digest": "sha256", 00:13:50.799 "dhgroup": "null" 00:13:50.799 } 00:13:50.799 } 00:13:50.799 ]' 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.799 11:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.057 11:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:51.990 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:52.248 11:18:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.248 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:52.505 00:13:52.505 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.505 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.505 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.763 { 00:13:52.763 "cntlid": 7, 00:13:52.763 "qid": 0, 00:13:52.763 "state": "enabled", 00:13:52.763 "thread": "nvmf_tgt_poll_group_000", 00:13:52.763 "listen_address": { 00:13:52.763 "trtype": "TCP", 00:13:52.763 "adrfam": "IPv4", 00:13:52.763 "traddr": "10.0.0.2", 00:13:52.763 "trsvcid": "4420" 00:13:52.763 }, 00:13:52.763 "peer_address": { 00:13:52.763 "trtype": "TCP", 00:13:52.763 "adrfam": "IPv4", 00:13:52.763 "traddr": "10.0.0.1", 00:13:52.763 "trsvcid": "46318" 00:13:52.763 }, 00:13:52.763 "auth": { 00:13:52.763 "state": "completed", 00:13:52.763 "digest": "sha256", 00:13:52.763 "dhgroup": "null" 00:13:52.763 } 00:13:52.763 } 00:13:52.763 ]' 00:13:52.763 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.020 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.020 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.020 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:53.020 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.020 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.020 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:53.021 11:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.279 11:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:13:54.212 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.212 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.212 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.212 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.212 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.212 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.213 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.777 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.777 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.034 { 00:13:55.034 "cntlid": 9, 00:13:55.034 "qid": 0, 00:13:55.034 "state": "enabled", 00:13:55.034 "thread": "nvmf_tgt_poll_group_000", 00:13:55.034 "listen_address": { 00:13:55.034 "trtype": "TCP", 00:13:55.034 "adrfam": "IPv4", 00:13:55.034 "traddr": "10.0.0.2", 00:13:55.034 "trsvcid": "4420" 00:13:55.034 }, 00:13:55.034 "peer_address": { 00:13:55.034 "trtype": "TCP", 00:13:55.034 "adrfam": "IPv4", 00:13:55.034 "traddr": "10.0.0.1", 00:13:55.034 "trsvcid": "46342" 00:13:55.034 }, 00:13:55.034 "auth": { 00:13:55.034 "state": "completed", 00:13:55.034 "digest": "sha256", 00:13:55.034 "dhgroup": "ffdhe2048" 00:13:55.034 } 00:13:55.034 } 00:13:55.034 ]' 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:55.034 11:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.034 11:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.034 11:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.034 11:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.292 11:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.225 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.504 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.762 00:13:56.762 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.762 11:18:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.762 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.020 { 00:13:57.020 "cntlid": 11, 00:13:57.020 "qid": 0, 00:13:57.020 "state": "enabled", 00:13:57.020 "thread": "nvmf_tgt_poll_group_000", 00:13:57.020 "listen_address": { 00:13:57.020 "trtype": "TCP", 00:13:57.020 "adrfam": "IPv4", 00:13:57.020 "traddr": "10.0.0.2", 00:13:57.020 "trsvcid": "4420" 00:13:57.020 }, 00:13:57.020 "peer_address": { 00:13:57.020 "trtype": "TCP", 00:13:57.020 "adrfam": "IPv4", 00:13:57.020 "traddr": "10.0.0.1", 00:13:57.020 "trsvcid": "46358" 00:13:57.020 }, 00:13:57.020 "auth": { 00:13:57.020 "state": "completed", 00:13:57.020 "digest": "sha256", 00:13:57.020 "dhgroup": "ffdhe2048" 00:13:57.020 } 00:13:57.020 } 00:13:57.020 ]' 00:13:57.020 11:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.020 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.020 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.020 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:57.020 11:18:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.020 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.020 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.020 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.277 11:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.211 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.468 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.726 
00:13:58.726 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.726 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.726 11:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.984 { 00:13:58.984 "cntlid": 13, 00:13:58.984 "qid": 0, 00:13:58.984 "state": "enabled", 00:13:58.984 "thread": "nvmf_tgt_poll_group_000", 00:13:58.984 "listen_address": { 00:13:58.984 "trtype": "TCP", 00:13:58.984 "adrfam": "IPv4", 00:13:58.984 "traddr": "10.0.0.2", 00:13:58.984 "trsvcid": "4420" 00:13:58.984 }, 00:13:58.984 "peer_address": { 00:13:58.984 "trtype": "TCP", 00:13:58.984 "adrfam": "IPv4", 00:13:58.984 "traddr": "10.0.0.1", 00:13:58.984 "trsvcid": "46384" 00:13:58.984 }, 00:13:58.984 "auth": { 00:13:58.984 "state": "completed", 00:13:58.984 "digest": "sha256", 00:13:58.984 "dhgroup": "ffdhe2048" 00:13:58.984 } 00:13:58.984 } 00:13:58.984 ]' 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.984 11:18:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:58.984 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.242 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.242 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.242 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.500 11:18:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.432 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:01.007 
00:14:01.007 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.007 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.007 11:18:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.007 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.007 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.007 11:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.007 11:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.007 11:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.007 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.008 { 00:14:01.008 "cntlid": 15, 00:14:01.008 "qid": 0, 00:14:01.008 "state": "enabled", 00:14:01.008 "thread": "nvmf_tgt_poll_group_000", 00:14:01.008 "listen_address": { 00:14:01.008 "trtype": "TCP", 00:14:01.008 "adrfam": "IPv4", 00:14:01.008 "traddr": "10.0.0.2", 00:14:01.008 "trsvcid": "4420" 00:14:01.008 }, 00:14:01.008 "peer_address": { 00:14:01.008 "trtype": "TCP", 00:14:01.008 "adrfam": "IPv4", 00:14:01.008 "traddr": "10.0.0.1", 00:14:01.008 "trsvcid": "46414" 00:14:01.008 }, 00:14:01.008 "auth": { 00:14:01.008 "state": "completed", 00:14:01.008 "digest": "sha256", 00:14:01.008 "dhgroup": "ffdhe2048" 00:14:01.008 } 00:14:01.008 } 00:14:01.008 ]' 00:14:01.008 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.269 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.269 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.269 11:18:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:01.269 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.269 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.269 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.269 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.526 11:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.456 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.457 11:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.714 11:18:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.714 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.714 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.970 00:14:02.970 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.970 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.971 11:18:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.227 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.227 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.227 11:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.228 { 00:14:03.228 "cntlid": 17, 00:14:03.228 "qid": 0, 00:14:03.228 "state": "enabled", 00:14:03.228 "thread": "nvmf_tgt_poll_group_000", 00:14:03.228 "listen_address": { 00:14:03.228 "trtype": "TCP", 00:14:03.228 "adrfam": "IPv4", 00:14:03.228 "traddr": "10.0.0.2", 00:14:03.228 "trsvcid": "4420" 00:14:03.228 }, 00:14:03.228 "peer_address": { 00:14:03.228 "trtype": "TCP", 00:14:03.228 "adrfam": "IPv4", 00:14:03.228 "traddr": "10.0.0.1", 00:14:03.228 "trsvcid": "54578" 00:14:03.228 }, 00:14:03.228 "auth": { 00:14:03.228 "state": "completed", 00:14:03.228 "digest": "sha256", 00:14:03.228 "dhgroup": "ffdhe3072" 00:14:03.228 } 00:14:03.228 } 00:14:03.228 ]' 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.228 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.484 11:18:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.416 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.674 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.932 00:14:04.932 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.932 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.932 11:18:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.189 { 00:14:05.189 "cntlid": 19, 00:14:05.189 "qid": 0, 00:14:05.189 "state": "enabled", 00:14:05.189 "thread": "nvmf_tgt_poll_group_000", 00:14:05.189 "listen_address": { 00:14:05.189 "trtype": "TCP", 00:14:05.189 "adrfam": "IPv4", 00:14:05.189 "traddr": "10.0.0.2", 00:14:05.189 "trsvcid": "4420" 00:14:05.189 }, 00:14:05.189 "peer_address": { 00:14:05.189 "trtype": "TCP", 00:14:05.189 "adrfam": "IPv4", 00:14:05.189 "traddr": "10.0.0.1", 00:14:05.189 "trsvcid": "54596" 00:14:05.189 }, 00:14:05.189 "auth": { 00:14:05.189 "state": "completed", 00:14:05.189 "digest": "sha256", 00:14:05.189 "dhgroup": "ffdhe3072" 00:14:05.189 } 00:14:05.189 } 00:14:05.189 ]' 00:14:05.189 
11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.189 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.446 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:05.446 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.446 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.446 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.446 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.704 11:18:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.635 11:18:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:06.635 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:06.892 11:18:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.158 00:14:07.158 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.158 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.158 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.415 { 00:14:07.415 "cntlid": 21, 00:14:07.415 "qid": 0, 00:14:07.415 "state": "enabled", 00:14:07.415 "thread": "nvmf_tgt_poll_group_000", 00:14:07.415 "listen_address": { 00:14:07.415 "trtype": "TCP", 00:14:07.415 "adrfam": "IPv4", 00:14:07.415 "traddr": "10.0.0.2", 00:14:07.415 "trsvcid": "4420" 00:14:07.415 }, 00:14:07.415 "peer_address": { 00:14:07.415 "trtype": "TCP", 00:14:07.415 "adrfam": "IPv4", 00:14:07.415 "traddr": "10.0.0.1", 00:14:07.415 "trsvcid": "54630" 00:14:07.415 }, 00:14:07.415 "auth": { 00:14:07.415 "state": "completed", 00:14:07.415 "digest": 
"sha256", 00:14:07.415 "dhgroup": "ffdhe3072" 00:14:07.415 } 00:14:07.415 } 00:14:07.415 ]' 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.415 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.672 11:18:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.637 11:18:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:08.637 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.921 11:18:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.178 00:14:09.178 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.178 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.178 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.435 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.435 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.435 11:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.435 11:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.435 11:18:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.435 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.435 { 00:14:09.435 "cntlid": 23, 00:14:09.435 "qid": 0, 00:14:09.435 "state": "enabled", 00:14:09.435 "thread": "nvmf_tgt_poll_group_000", 00:14:09.435 "listen_address": { 00:14:09.435 "trtype": "TCP", 00:14:09.435 "adrfam": "IPv4", 00:14:09.435 "traddr": "10.0.0.2", 00:14:09.435 "trsvcid": "4420" 00:14:09.435 }, 00:14:09.435 "peer_address": { 00:14:09.435 "trtype": "TCP", 00:14:09.435 "adrfam": "IPv4", 00:14:09.436 "traddr": "10.0.0.1", 00:14:09.436 "trsvcid": "54660" 00:14:09.436 }, 00:14:09.436 "auth": 
{ 00:14:09.436 "state": "completed", 00:14:09.436 "digest": "sha256", 00:14:09.436 "dhgroup": "ffdhe3072" 00:14:09.436 } 00:14:09.436 } 00:14:09.436 ]' 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.436 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.693 11:18:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.624 11:18:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.624 11:18:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.189 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.447 00:14:11.447 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.447 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.447 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.705 { 00:14:11.705 "cntlid": 25, 00:14:11.705 "qid": 0, 00:14:11.705 "state": "enabled", 00:14:11.705 "thread": "nvmf_tgt_poll_group_000", 00:14:11.705 "listen_address": { 00:14:11.705 "trtype": "TCP", 00:14:11.705 "adrfam": "IPv4", 00:14:11.705 "traddr": "10.0.0.2", 00:14:11.705 "trsvcid": "4420" 00:14:11.705 }, 00:14:11.705 "peer_address": { 00:14:11.705 "trtype": "TCP", 
00:14:11.705 "adrfam": "IPv4", 00:14:11.705 "traddr": "10.0.0.1", 00:14:11.705 "trsvcid": "54698" 00:14:11.705 }, 00:14:11.705 "auth": { 00:14:11.705 "state": "completed", 00:14:11.705 "digest": "sha256", 00:14:11.705 "dhgroup": "ffdhe4096" 00:14:11.705 } 00:14:11.705 } 00:14:11.705 ]' 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.705 11:18:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.269 11:18:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.224 11:18:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.224 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.790 00:14:13.790 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.790 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.790 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.048 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.048 11:18:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.048 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.048 11:18:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.048 { 00:14:14.048 "cntlid": 27, 00:14:14.048 "qid": 0, 00:14:14.048 "state": "enabled", 00:14:14.048 "thread": "nvmf_tgt_poll_group_000", 00:14:14.048 "listen_address": { 00:14:14.048 "trtype": "TCP", 00:14:14.048 "adrfam": 
"IPv4", 00:14:14.048 "traddr": "10.0.0.2", 00:14:14.048 "trsvcid": "4420" 00:14:14.048 }, 00:14:14.048 "peer_address": { 00:14:14.048 "trtype": "TCP", 00:14:14.048 "adrfam": "IPv4", 00:14:14.048 "traddr": "10.0.0.1", 00:14:14.048 "trsvcid": "58794" 00:14:14.048 }, 00:14:14.048 "auth": { 00:14:14.048 "state": "completed", 00:14:14.048 "digest": "sha256", 00:14:14.048 "dhgroup": "ffdhe4096" 00:14:14.048 } 00:14:14.048 } 00:14:14.048 ]' 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.048 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.305 11:18:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:14:15.237 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:15.495 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.753 11:18:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.753 11:18:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.011 00:14:16.011 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.011 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.011 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.268 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.268 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.269 { 00:14:16.269 "cntlid": 29, 00:14:16.269 "qid": 0, 00:14:16.269 "state": "enabled", 00:14:16.269 "thread": 
"nvmf_tgt_poll_group_000", 00:14:16.269 "listen_address": { 00:14:16.269 "trtype": "TCP", 00:14:16.269 "adrfam": "IPv4", 00:14:16.269 "traddr": "10.0.0.2", 00:14:16.269 "trsvcid": "4420" 00:14:16.269 }, 00:14:16.269 "peer_address": { 00:14:16.269 "trtype": "TCP", 00:14:16.269 "adrfam": "IPv4", 00:14:16.269 "traddr": "10.0.0.1", 00:14:16.269 "trsvcid": "58816" 00:14:16.269 }, 00:14:16.269 "auth": { 00:14:16.269 "state": "completed", 00:14:16.269 "digest": "sha256", 00:14:16.269 "dhgroup": "ffdhe4096" 00:14:16.269 } 00:14:16.269 } 00:14:16.269 ]' 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.269 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.525 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:16.525 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.525 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.525 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.525 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.781 11:18:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:17.710 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.967 11:18:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.223 00:14:18.223 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.223 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.223 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.480 { 00:14:18.480 "cntlid": 31, 00:14:18.480 "qid": 0, 00:14:18.480 "state": "enabled", 00:14:18.480 "thread": 
"nvmf_tgt_poll_group_000", 00:14:18.480 "listen_address": { 00:14:18.480 "trtype": "TCP", 00:14:18.480 "adrfam": "IPv4", 00:14:18.480 "traddr": "10.0.0.2", 00:14:18.480 "trsvcid": "4420" 00:14:18.480 }, 00:14:18.480 "peer_address": { 00:14:18.480 "trtype": "TCP", 00:14:18.480 "adrfam": "IPv4", 00:14:18.480 "traddr": "10.0.0.1", 00:14:18.480 "trsvcid": "58842" 00:14:18.480 }, 00:14:18.480 "auth": { 00:14:18.480 "state": "completed", 00:14:18.480 "digest": "sha256", 00:14:18.480 "dhgroup": "ffdhe4096" 00:14:18.480 } 00:14:18.480 } 00:14:18.480 ]' 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.480 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.738 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:18.738 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.738 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.738 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.738 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.995 11:18:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.927 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:19.927 11:18:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.185 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.749 00:14:20.749 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.749 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.750 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.007 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.007 11:18:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.007 11:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.007 11:18:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.007 11:18:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.007 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:14:21.007 { 00:14:21.007 "cntlid": 33, 00:14:21.007 "qid": 0, 00:14:21.007 "state": "enabled", 00:14:21.007 "thread": "nvmf_tgt_poll_group_000", 00:14:21.007 "listen_address": { 00:14:21.007 "trtype": "TCP", 00:14:21.007 "adrfam": "IPv4", 00:14:21.007 "traddr": "10.0.0.2", 00:14:21.007 "trsvcid": "4420" 00:14:21.007 }, 00:14:21.007 "peer_address": { 00:14:21.007 "trtype": "TCP", 00:14:21.007 "adrfam": "IPv4", 00:14:21.007 "traddr": "10.0.0.1", 00:14:21.007 "trsvcid": "58874" 00:14:21.007 }, 00:14:21.007 "auth": { 00:14:21.007 "state": "completed", 00:14:21.007 "digest": "sha256", 00:14:21.007 "dhgroup": "ffdhe6144" 00:14:21.007 } 00:14:21.007 } 00:14:21.007 ]' 00:14:21.007 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.007 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.008 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.008 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:21.008 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.008 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.008 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.008 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.574 11:18:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret 
DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:22.138 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.138 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.138 11:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.139 11:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.395 11:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.395 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.395 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:22.395 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:22.652 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:22.652 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.652 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.653 11:18:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.218 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.218 11:18:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.218 { 00:14:23.218 "cntlid": 35, 00:14:23.218 "qid": 0, 00:14:23.218 "state": "enabled", 00:14:23.218 "thread": "nvmf_tgt_poll_group_000", 00:14:23.218 "listen_address": { 00:14:23.218 "trtype": "TCP", 00:14:23.218 "adrfam": "IPv4", 00:14:23.218 "traddr": "10.0.0.2", 00:14:23.218 "trsvcid": "4420" 00:14:23.218 }, 00:14:23.218 "peer_address": { 00:14:23.218 "trtype": "TCP", 00:14:23.218 "adrfam": "IPv4", 00:14:23.218 "traddr": "10.0.0.1", 00:14:23.218 "trsvcid": "35720" 00:14:23.218 }, 00:14:23.218 "auth": { 00:14:23.218 "state": "completed", 00:14:23.218 "digest": "sha256", 00:14:23.218 "dhgroup": "ffdhe6144" 00:14:23.218 } 00:14:23.218 } 00:14:23.218 ]' 00:14:23.218 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.475 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.732 11:18:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:14:24.676 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.676 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.677 11:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.677 11:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.677 11:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.677 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.677 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:24.677 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.934 11:18:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.498 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.498 11:18:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.498 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.498 { 00:14:25.498 "cntlid": 37, 00:14:25.498 "qid": 0, 00:14:25.498 "state": "enabled", 00:14:25.498 "thread": "nvmf_tgt_poll_group_000", 00:14:25.498 "listen_address": { 00:14:25.498 "trtype": "TCP", 00:14:25.498 "adrfam": "IPv4", 00:14:25.498 "traddr": "10.0.0.2", 00:14:25.498 "trsvcid": "4420" 00:14:25.498 }, 00:14:25.498 "peer_address": { 00:14:25.498 "trtype": "TCP", 00:14:25.498 "adrfam": "IPv4", 00:14:25.498 "traddr": "10.0.0.1", 00:14:25.498 "trsvcid": "35748" 00:14:25.498 }, 00:14:25.498 "auth": { 00:14:25.498 "state": "completed", 00:14:25.498 "digest": "sha256", 00:14:25.498 "dhgroup": "ffdhe6144" 00:14:25.498 } 00:14:25.498 } 00:14:25.498 ]' 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.754 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.011 11:18:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:26.943 11:18:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:27.199 11:18:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.199 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.761 00:14:27.762 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.762 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.762 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.018 11:18:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.018 { 00:14:28.018 "cntlid": 39, 00:14:28.018 "qid": 0, 00:14:28.018 "state": "enabled", 00:14:28.018 "thread": "nvmf_tgt_poll_group_000", 00:14:28.018 "listen_address": { 00:14:28.018 "trtype": "TCP", 00:14:28.018 "adrfam": "IPv4", 00:14:28.018 "traddr": "10.0.0.2", 00:14:28.018 "trsvcid": "4420" 00:14:28.018 }, 00:14:28.018 "peer_address": { 00:14:28.018 "trtype": "TCP", 00:14:28.018 "adrfam": "IPv4", 00:14:28.018 "traddr": "10.0.0.1", 00:14:28.018 "trsvcid": "35766" 00:14:28.018 }, 00:14:28.018 "auth": { 00:14:28.018 "state": "completed", 00:14:28.018 "digest": "sha256", 00:14:28.018 "dhgroup": "ffdhe6144" 00:14:28.018 } 00:14:28.018 } 00:14:28.018 ]' 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.018 11:18:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.018 11:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.018 11:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.018 11:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.018 11:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.018 11:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.275 11:18:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:29.207 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.464 11:18:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.451 00:14:30.451 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.451 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.451 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.709 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.709 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:30.709 11:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.709 11:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.709 11:18:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.709 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.709 { 00:14:30.709 "cntlid": 41, 00:14:30.709 "qid": 0, 00:14:30.709 "state": "enabled", 00:14:30.710 "thread": "nvmf_tgt_poll_group_000", 00:14:30.710 "listen_address": { 00:14:30.710 "trtype": "TCP", 00:14:30.710 "adrfam": "IPv4", 00:14:30.710 "traddr": "10.0.0.2", 00:14:30.710 "trsvcid": "4420" 00:14:30.710 }, 00:14:30.710 "peer_address": { 00:14:30.710 "trtype": "TCP", 00:14:30.710 "adrfam": "IPv4", 00:14:30.710 "traddr": "10.0.0.1", 00:14:30.710 "trsvcid": "35788" 00:14:30.710 }, 00:14:30.710 "auth": { 00:14:30.710 "state": "completed", 00:14:30.710 "digest": "sha256", 00:14:30.710 "dhgroup": "ffdhe8192" 00:14:30.710 } 00:14:30.710 } 00:14:30.710 ]' 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.710 11:18:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:30.967 11:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:31.900 11:18:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.158 11:18:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.090 00:14:33.090 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.090 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.090 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.347 { 00:14:33.347 "cntlid": 43, 00:14:33.347 "qid": 0, 00:14:33.347 "state": "enabled", 00:14:33.347 "thread": "nvmf_tgt_poll_group_000", 00:14:33.347 "listen_address": { 00:14:33.347 "trtype": "TCP", 00:14:33.347 "adrfam": "IPv4", 00:14:33.347 "traddr": "10.0.0.2", 00:14:33.347 "trsvcid": "4420" 00:14:33.347 }, 00:14:33.347 "peer_address": { 00:14:33.347 "trtype": "TCP", 00:14:33.347 "adrfam": "IPv4", 00:14:33.347 "traddr": "10.0.0.1", 00:14:33.347 "trsvcid": "35480" 00:14:33.347 }, 00:14:33.347 "auth": { 00:14:33.347 "state": "completed", 00:14:33.347 "digest": "sha256", 00:14:33.347 "dhgroup": "ffdhe8192" 00:14:33.347 } 00:14:33.347 } 00:14:33.347 ]' 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:33.347 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.348 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.348 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.348 11:18:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.605 11:18:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:34.538 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:34.795 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.796 11:19:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.728 00:14:35.728 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.728 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.728 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.986 { 00:14:35.986 "cntlid": 45, 00:14:35.986 "qid": 0, 00:14:35.986 "state": "enabled", 00:14:35.986 "thread": "nvmf_tgt_poll_group_000", 00:14:35.986 "listen_address": { 00:14:35.986 "trtype": "TCP", 00:14:35.986 "adrfam": "IPv4", 00:14:35.986 "traddr": "10.0.0.2", 00:14:35.986 "trsvcid": "4420" 00:14:35.986 }, 00:14:35.986 "peer_address": { 00:14:35.986 "trtype": "TCP", 00:14:35.986 "adrfam": "IPv4", 00:14:35.986 "traddr": "10.0.0.1", 00:14:35.986 "trsvcid": "35510" 00:14:35.986 }, 00:14:35.986 "auth": { 00:14:35.986 "state": "completed", 00:14:35.986 "digest": "sha256", 00:14:35.986 "dhgroup": "ffdhe8192" 00:14:35.986 } 00:14:35.986 } 00:14:35.986 ]' 00:14:35.986 11:19:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:35.986 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.244 11:19:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.175 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:37.431 11:19:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.431 11:19:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.360 00:14:38.360 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.360 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.360 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.617 { 00:14:38.617 "cntlid": 47, 00:14:38.617 "qid": 0, 00:14:38.617 "state": "enabled", 00:14:38.617 "thread": "nvmf_tgt_poll_group_000", 00:14:38.617 "listen_address": { 00:14:38.617 "trtype": "TCP", 00:14:38.617 "adrfam": "IPv4", 00:14:38.617 "traddr": "10.0.0.2", 00:14:38.617 "trsvcid": "4420" 00:14:38.617 }, 00:14:38.617 "peer_address": { 00:14:38.617 "trtype": "TCP", 00:14:38.617 "adrfam": "IPv4", 00:14:38.617 "traddr": "10.0.0.1", 00:14:38.617 "trsvcid": "35540" 00:14:38.617 }, 00:14:38.617 "auth": { 00:14:38.617 "state": "completed", 00:14:38.617 "digest": "sha256", 00:14:38.617 "dhgroup": "ffdhe8192" 00:14:38.617 } 00:14:38.617 } 00:14:38.617 ]' 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.617 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.618 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.618 11:19:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.618 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.876 11:19:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:39.811 11:19:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.068 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.632 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.632 11:19:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.632 { 00:14:40.632 "cntlid": 49, 00:14:40.632 "qid": 0, 00:14:40.632 "state": "enabled", 00:14:40.632 "thread": "nvmf_tgt_poll_group_000", 00:14:40.632 "listen_address": { 00:14:40.632 "trtype": "TCP", 00:14:40.632 "adrfam": "IPv4", 00:14:40.632 "traddr": "10.0.0.2", 00:14:40.632 "trsvcid": "4420" 00:14:40.632 }, 00:14:40.632 "peer_address": { 00:14:40.632 "trtype": "TCP", 00:14:40.632 "adrfam": "IPv4", 00:14:40.632 "traddr": "10.0.0.1", 00:14:40.632 "trsvcid": "35572" 00:14:40.632 }, 00:14:40.632 "auth": { 00:14:40.632 "state": "completed", 00:14:40.632 "digest": "sha384", 00:14:40.632 "dhgroup": "null" 00:14:40.632 } 00:14:40.632 } 00:14:40.632 ]' 00:14:40.632 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.889 11:19:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.146 11:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:42.076 11:19:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.333 11:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.334 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.334 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.590 00:14:42.591 
11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.591 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.591 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.848 { 00:14:42.848 "cntlid": 51, 00:14:42.848 "qid": 0, 00:14:42.848 "state": "enabled", 00:14:42.848 "thread": "nvmf_tgt_poll_group_000", 00:14:42.848 "listen_address": { 00:14:42.848 "trtype": "TCP", 00:14:42.848 "adrfam": "IPv4", 00:14:42.848 "traddr": "10.0.0.2", 00:14:42.848 "trsvcid": "4420" 00:14:42.848 }, 00:14:42.848 "peer_address": { 00:14:42.848 "trtype": "TCP", 00:14:42.848 "adrfam": "IPv4", 00:14:42.848 "traddr": "10.0.0.1", 00:14:42.848 "trsvcid": "44226" 00:14:42.848 }, 00:14:42.848 "auth": { 00:14:42.848 "state": "completed", 00:14:42.848 "digest": "sha384", 00:14:42.848 "dhgroup": "null" 00:14:42.848 } 00:14:42.848 } 00:14:42.848 ]' 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.848 11:19:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.848 11:19:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.105 11:19:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.038 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:44.038 11:19:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.296 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:44.554 00:14:44.554 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.554 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.554 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.812 { 00:14:44.812 "cntlid": 53, 00:14:44.812 "qid": 0, 00:14:44.812 "state": "enabled", 00:14:44.812 "thread": "nvmf_tgt_poll_group_000", 00:14:44.812 "listen_address": { 00:14:44.812 "trtype": "TCP", 00:14:44.812 "adrfam": "IPv4", 00:14:44.812 "traddr": "10.0.0.2", 00:14:44.812 "trsvcid": "4420" 00:14:44.812 }, 00:14:44.812 "peer_address": { 00:14:44.812 "trtype": "TCP", 00:14:44.812 "adrfam": "IPv4", 00:14:44.812 "traddr": "10.0.0.1", 00:14:44.812 "trsvcid": "44252" 00:14:44.812 }, 00:14:44.812 "auth": { 00:14:44.812 "state": "completed", 00:14:44.812 "digest": "sha384", 00:14:44.812 "dhgroup": "null" 00:14:44.812 } 00:14:44.812 } 00:14:44.812 ]' 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:44.812 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.069 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.069 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.069 11:19:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.327 11:19:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:14:46.260 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:46.517 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:46.517 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.517 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:46.517 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:46.517 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.517 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.518 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:46.518 11:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.518 11:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.518 11:19:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.518 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.518 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:14:46.776 00:14:46.776 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.776 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.776 11:19:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.049 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.050 { 00:14:47.050 "cntlid": 55, 00:14:47.050 "qid": 0, 00:14:47.050 "state": "enabled", 00:14:47.050 "thread": "nvmf_tgt_poll_group_000", 00:14:47.050 "listen_address": { 00:14:47.050 "trtype": "TCP", 00:14:47.050 "adrfam": "IPv4", 00:14:47.050 "traddr": "10.0.0.2", 00:14:47.050 "trsvcid": "4420" 00:14:47.050 }, 00:14:47.050 "peer_address": { 00:14:47.050 "trtype": "TCP", 00:14:47.050 "adrfam": "IPv4", 00:14:47.050 "traddr": "10.0.0.1", 00:14:47.050 "trsvcid": "44280" 00:14:47.050 }, 00:14:47.050 "auth": { 00:14:47.050 "state": "completed", 00:14:47.050 "digest": "sha384", 00:14:47.050 "dhgroup": "null" 00:14:47.050 } 00:14:47.050 } 00:14:47.050 ]' 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.050 
11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.050 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.313 11:19:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:48.244 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.502 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.067 00:14:49.067 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.067 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.067 11:19:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.067 { 00:14:49.067 "cntlid": 57, 00:14:49.067 "qid": 0, 00:14:49.067 "state": "enabled", 00:14:49.067 "thread": "nvmf_tgt_poll_group_000", 00:14:49.067 "listen_address": { 00:14:49.067 "trtype": "TCP", 00:14:49.067 "adrfam": "IPv4", 00:14:49.067 "traddr": "10.0.0.2", 00:14:49.067 "trsvcid": "4420" 00:14:49.067 }, 00:14:49.067 "peer_address": { 00:14:49.067 "trtype": "TCP", 00:14:49.067 "adrfam": "IPv4", 00:14:49.067 "traddr": "10.0.0.1", 00:14:49.067 "trsvcid": "44302" 00:14:49.067 }, 00:14:49.067 "auth": { 00:14:49.067 "state": "completed", 00:14:49.067 "digest": "sha384", 00:14:49.067 "dhgroup": "ffdhe2048" 00:14:49.067 } 00:14:49.067 } 00:14:49.067 ]' 00:14:49.067 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.325 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.583 11:19:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:50.517 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.775 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.033 00:14:51.033 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.033 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.033 11:19:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.290 { 00:14:51.290 "cntlid": 59, 00:14:51.290 "qid": 0, 00:14:51.290 "state": "enabled", 00:14:51.290 "thread": "nvmf_tgt_poll_group_000", 00:14:51.290 "listen_address": { 00:14:51.290 "trtype": "TCP", 00:14:51.290 "adrfam": "IPv4", 00:14:51.290 "traddr": "10.0.0.2", 00:14:51.290 "trsvcid": "4420" 00:14:51.290 }, 00:14:51.290 "peer_address": { 00:14:51.290 "trtype": "TCP", 00:14:51.290 "adrfam": "IPv4", 00:14:51.290 "traddr": "10.0.0.1", 00:14:51.290 "trsvcid": "44320" 00:14:51.290 }, 00:14:51.290 "auth": { 00:14:51.290 "state": "completed", 00:14:51.290 "digest": "sha384", 00:14:51.290 "dhgroup": "ffdhe2048" 00:14:51.290 } 00:14:51.290 } 00:14:51.290 ]' 00:14:51.290 
11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.290 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.577 11:19:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.534 11:19:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:52.534 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:14:52.792 11:19:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.357 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.357 { 00:14:53.357 "cntlid": 61, 00:14:53.357 "qid": 0, 00:14:53.357 "state": "enabled", 00:14:53.357 "thread": "nvmf_tgt_poll_group_000", 00:14:53.357 "listen_address": { 00:14:53.357 "trtype": "TCP", 00:14:53.357 "adrfam": "IPv4", 00:14:53.357 "traddr": "10.0.0.2", 00:14:53.357 "trsvcid": "4420" 00:14:53.357 }, 00:14:53.357 "peer_address": { 00:14:53.357 "trtype": "TCP", 00:14:53.357 "adrfam": "IPv4", 00:14:53.357 "traddr": "10.0.0.1", 00:14:53.357 "trsvcid": "46088" 00:14:53.357 }, 00:14:53.357 "auth": { 00:14:53.357 "state": "completed", 00:14:53.357 "digest": 
"sha384", 00:14:53.357 "dhgroup": "ffdhe2048" 00:14:53.357 } 00:14:53.357 } 00:14:53.357 ]' 00:14:53.357 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.615 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.872 11:19:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.805 11:19:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.805 11:19:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.063 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.320 00:14:55.320 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.320 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.320 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.578 { 00:14:55.578 "cntlid": 63, 00:14:55.578 "qid": 0, 00:14:55.578 "state": "enabled", 00:14:55.578 "thread": "nvmf_tgt_poll_group_000", 00:14:55.578 "listen_address": { 00:14:55.578 "trtype": "TCP", 00:14:55.578 "adrfam": "IPv4", 00:14:55.578 "traddr": "10.0.0.2", 00:14:55.578 "trsvcid": "4420" 00:14:55.578 }, 00:14:55.578 "peer_address": { 00:14:55.578 "trtype": "TCP", 00:14:55.578 "adrfam": "IPv4", 00:14:55.578 "traddr": "10.0.0.1", 00:14:55.578 "trsvcid": "46112" 00:14:55.578 }, 00:14:55.578 "auth": 
{ 00:14:55.578 "state": "completed", 00:14:55.578 "digest": "sha384", 00:14:55.578 "dhgroup": "ffdhe2048" 00:14:55.578 } 00:14:55.578 } 00:14:55.578 ]' 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.578 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.835 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.835 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.835 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.093 11:19:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.029 11:19:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.029 11:19:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.286 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.543 00:14:57.543 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.543 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.543 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.800 { 00:14:57.800 "cntlid": 65, 00:14:57.800 "qid": 0, 00:14:57.800 "state": "enabled", 00:14:57.800 "thread": "nvmf_tgt_poll_group_000", 00:14:57.800 "listen_address": { 00:14:57.800 "trtype": "TCP", 00:14:57.800 "adrfam": "IPv4", 00:14:57.800 "traddr": "10.0.0.2", 00:14:57.800 "trsvcid": "4420" 00:14:57.800 }, 00:14:57.800 "peer_address": { 00:14:57.800 "trtype": "TCP", 
00:14:57.800 "adrfam": "IPv4", 00:14:57.800 "traddr": "10.0.0.1", 00:14:57.800 "trsvcid": "46124" 00:14:57.800 }, 00:14:57.800 "auth": { 00:14:57.800 "state": "completed", 00:14:57.800 "digest": "sha384", 00:14:57.800 "dhgroup": "ffdhe3072" 00:14:57.800 } 00:14:57.800 } 00:14:57.800 ]' 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.800 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.057 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.057 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.057 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.057 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.057 11:19:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.314 11:19:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:59.247 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.505 11:19:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.505 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.763 00:14:59.763 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.763 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.763 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.021 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.021 11:19:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.021 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.021 11:19:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.021 { 00:15:00.021 "cntlid": 67, 00:15:00.021 "qid": 0, 00:15:00.021 "state": "enabled", 00:15:00.021 "thread": "nvmf_tgt_poll_group_000", 00:15:00.021 "listen_address": { 00:15:00.021 "trtype": "TCP", 00:15:00.021 "adrfam": 
"IPv4", 00:15:00.021 "traddr": "10.0.0.2", 00:15:00.021 "trsvcid": "4420" 00:15:00.021 }, 00:15:00.021 "peer_address": { 00:15:00.021 "trtype": "TCP", 00:15:00.021 "adrfam": "IPv4", 00:15:00.021 "traddr": "10.0.0.1", 00:15:00.021 "trsvcid": "46152" 00:15:00.021 }, 00:15:00.021 "auth": { 00:15:00.021 "state": "completed", 00:15:00.021 "digest": "sha384", 00:15:00.021 "dhgroup": "ffdhe3072" 00:15:00.021 } 00:15:00.021 } 00:15:00.021 ]' 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.021 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.279 11:19:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.214 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.472 11:19:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.472 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.730 00:15:01.730 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.730 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.730 11:19:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.987 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.987 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.987 11:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.987 11:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.245 { 00:15:02.245 "cntlid": 69, 00:15:02.245 "qid": 0, 00:15:02.245 "state": "enabled", 00:15:02.245 "thread": 
"nvmf_tgt_poll_group_000", 00:15:02.245 "listen_address": { 00:15:02.245 "trtype": "TCP", 00:15:02.245 "adrfam": "IPv4", 00:15:02.245 "traddr": "10.0.0.2", 00:15:02.245 "trsvcid": "4420" 00:15:02.245 }, 00:15:02.245 "peer_address": { 00:15:02.245 "trtype": "TCP", 00:15:02.245 "adrfam": "IPv4", 00:15:02.245 "traddr": "10.0.0.1", 00:15:02.245 "trsvcid": "56014" 00:15:02.245 }, 00:15:02.245 "auth": { 00:15:02.245 "state": "completed", 00:15:02.245 "digest": "sha384", 00:15:02.245 "dhgroup": "ffdhe3072" 00:15:02.245 } 00:15:02.245 } 00:15:02.245 ]' 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.245 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.503 11:19:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.434 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:03.691 11:19:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:03.948 00:15:03.949 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.949 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.949 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.206 { 00:15:04.206 "cntlid": 71, 00:15:04.206 "qid": 0, 00:15:04.206 "state": "enabled", 00:15:04.206 "thread": 
"nvmf_tgt_poll_group_000", 00:15:04.206 "listen_address": { 00:15:04.206 "trtype": "TCP", 00:15:04.206 "adrfam": "IPv4", 00:15:04.206 "traddr": "10.0.0.2", 00:15:04.206 "trsvcid": "4420" 00:15:04.206 }, 00:15:04.206 "peer_address": { 00:15:04.206 "trtype": "TCP", 00:15:04.206 "adrfam": "IPv4", 00:15:04.206 "traddr": "10.0.0.1", 00:15:04.206 "trsvcid": "56044" 00:15:04.206 }, 00:15:04.206 "auth": { 00:15:04.206 "state": "completed", 00:15:04.206 "digest": "sha384", 00:15:04.206 "dhgroup": "ffdhe3072" 00:15:04.206 } 00:15:04.206 } 00:15:04.206 ]' 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.206 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.463 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.463 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.463 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.463 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.463 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.720 11:19:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:15:05.649 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.649 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:05.650 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.906 11:19:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.162 00:15:06.162 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.162 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.162 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:06.419 { 00:15:06.419 "cntlid": 73, 00:15:06.419 "qid": 0, 00:15:06.419 "state": "enabled", 00:15:06.419 "thread": "nvmf_tgt_poll_group_000", 00:15:06.419 "listen_address": { 00:15:06.419 "trtype": "TCP", 00:15:06.419 "adrfam": "IPv4", 00:15:06.419 "traddr": "10.0.0.2", 00:15:06.419 "trsvcid": "4420" 00:15:06.419 }, 00:15:06.419 "peer_address": { 00:15:06.419 "trtype": "TCP", 00:15:06.419 "adrfam": "IPv4", 00:15:06.419 "traddr": "10.0.0.1", 00:15:06.419 "trsvcid": "56072" 00:15:06.419 }, 00:15:06.419 "auth": { 00:15:06.419 "state": "completed", 00:15:06.419 "digest": "sha384", 00:15:06.419 "dhgroup": "ffdhe4096" 00:15:06.419 } 00:15:06.419 } 00:15:06.419 ]' 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.419 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.676 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.676 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.676 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.676 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.676 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.932 11:19:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret 
DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:07.862 11:19:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.119 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.376 00:15:08.376 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.376 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.376 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.634 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.634 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.634 11:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.634 11:19:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.634 11:19:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.634 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.634 { 00:15:08.634 "cntlid": 75, 00:15:08.634 "qid": 0, 00:15:08.634 "state": "enabled", 00:15:08.634 "thread": "nvmf_tgt_poll_group_000", 00:15:08.634 "listen_address": { 00:15:08.634 "trtype": "TCP", 00:15:08.634 "adrfam": "IPv4", 00:15:08.634 "traddr": "10.0.0.2", 00:15:08.634 "trsvcid": "4420" 00:15:08.634 }, 00:15:08.634 "peer_address": { 00:15:08.634 "trtype": "TCP", 00:15:08.634 "adrfam": "IPv4", 00:15:08.634 "traddr": "10.0.0.1", 00:15:08.634 "trsvcid": "56102" 00:15:08.634 }, 00:15:08.634 "auth": { 00:15:08.634 "state": "completed", 00:15:08.634 "digest": "sha384", 00:15:08.634 "dhgroup": "ffdhe4096" 00:15:08.634 } 00:15:08.634 } 00:15:08.634 ]' 00:15:08.634 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.891 11:19:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.148 11:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:10.079 11:19:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.336 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.593 00:15:10.593 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.593 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.593 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.849 11:19:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.849 { 00:15:10.849 "cntlid": 77, 00:15:10.849 "qid": 0, 00:15:10.849 "state": "enabled", 00:15:10.849 "thread": "nvmf_tgt_poll_group_000", 00:15:10.849 "listen_address": { 00:15:10.849 "trtype": "TCP", 00:15:10.849 "adrfam": "IPv4", 00:15:10.849 "traddr": "10.0.0.2", 00:15:10.849 "trsvcid": "4420" 00:15:10.849 }, 00:15:10.849 "peer_address": { 00:15:10.849 "trtype": "TCP", 00:15:10.849 "adrfam": "IPv4", 00:15:10.849 "traddr": "10.0.0.1", 00:15:10.849 "trsvcid": "56130" 00:15:10.849 }, 00:15:10.849 "auth": { 00:15:10.849 "state": "completed", 00:15:10.849 "digest": "sha384", 00:15:10.849 "dhgroup": "ffdhe4096" 00:15:10.849 } 00:15:10.849 } 00:15:10.849 ]' 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.849 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.850 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.850 11:19:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.107 11:19:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:12.037 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:12.294 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:12.294 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.294 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:12.295 11:19:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.295 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.913 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.913 11:19:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.913 { 00:15:12.913 "cntlid": 79, 00:15:12.913 "qid": 0, 00:15:12.913 "state": "enabled", 00:15:12.913 "thread": "nvmf_tgt_poll_group_000", 00:15:12.913 "listen_address": { 00:15:12.913 "trtype": "TCP", 00:15:12.913 "adrfam": "IPv4", 00:15:12.913 "traddr": "10.0.0.2", 00:15:12.913 "trsvcid": "4420" 00:15:12.913 }, 00:15:12.913 "peer_address": { 00:15:12.913 "trtype": "TCP", 00:15:12.913 "adrfam": "IPv4", 00:15:12.913 "traddr": "10.0.0.1", 00:15:12.913 "trsvcid": "34984" 00:15:12.913 }, 00:15:12.913 "auth": { 00:15:12.913 "state": "completed", 00:15:12.913 "digest": "sha384", 00:15:12.913 "dhgroup": "ffdhe4096" 00:15:12.913 } 00:15:12.913 } 00:15:12.913 ]' 00:15:12.913 11:19:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.913 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.913 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.194 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:13.194 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.194 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.194 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.194 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.452 11:19:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:15:14.384 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.385 11:19:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.949 00:15:14.949 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.949 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.949 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.206 { 00:15:15.206 "cntlid": 81, 00:15:15.206 "qid": 0, 00:15:15.206 "state": "enabled", 00:15:15.206 "thread": "nvmf_tgt_poll_group_000", 00:15:15.206 "listen_address": { 00:15:15.206 "trtype": "TCP", 00:15:15.206 "adrfam": "IPv4", 00:15:15.206 "traddr": "10.0.0.2", 00:15:15.206 "trsvcid": "4420" 00:15:15.206 }, 00:15:15.206 "peer_address": { 00:15:15.206 "trtype": "TCP", 00:15:15.206 "adrfam": "IPv4", 00:15:15.206 "traddr": "10.0.0.1", 00:15:15.206 "trsvcid": "35016" 00:15:15.206 }, 00:15:15.206 "auth": { 00:15:15.206 "state": "completed", 00:15:15.206 "digest": "sha384", 00:15:15.206 "dhgroup": "ffdhe6144" 00:15:15.206 } 00:15:15.206 } 00:15:15.206 ]' 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.206 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.464 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.464 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.464 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.464 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.464 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:15.721 11:19:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:16.654 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.912 11:19:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.475 00:15:17.475 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.475 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.475 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.733 { 00:15:17.733 "cntlid": 83, 00:15:17.733 "qid": 0, 00:15:17.733 "state": "enabled", 00:15:17.733 "thread": "nvmf_tgt_poll_group_000", 00:15:17.733 "listen_address": { 00:15:17.733 "trtype": "TCP", 00:15:17.733 "adrfam": "IPv4", 00:15:17.733 "traddr": "10.0.0.2", 00:15:17.733 "trsvcid": "4420" 00:15:17.733 }, 00:15:17.733 "peer_address": { 00:15:17.733 "trtype": "TCP", 00:15:17.733 "adrfam": "IPv4", 00:15:17.733 "traddr": "10.0.0.1", 00:15:17.733 "trsvcid": "35042" 00:15:17.733 }, 00:15:17.733 "auth": { 00:15:17.733 "state": "completed", 00:15:17.733 "digest": "sha384", 00:15:17.733 "dhgroup": "ffdhe6144" 00:15:17.733 } 00:15:17.733 } 00:15:17.733 ]' 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.733 11:19:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.991 11:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:18.923 11:19:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.180 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.745 00:15:19.745 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.745 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.745 11:19:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.002 { 00:15:20.002 "cntlid": 85, 00:15:20.002 "qid": 0, 00:15:20.002 "state": "enabled", 00:15:20.002 "thread": "nvmf_tgt_poll_group_000", 00:15:20.002 "listen_address": { 00:15:20.002 "trtype": "TCP", 00:15:20.002 "adrfam": "IPv4", 00:15:20.002 "traddr": "10.0.0.2", 00:15:20.002 "trsvcid": "4420" 00:15:20.002 }, 00:15:20.002 "peer_address": { 00:15:20.002 "trtype": "TCP", 00:15:20.002 "adrfam": "IPv4", 00:15:20.002 "traddr": "10.0.0.1", 00:15:20.002 "trsvcid": "35074" 00:15:20.002 }, 00:15:20.002 "auth": { 00:15:20.002 "state": "completed", 00:15:20.002 "digest": "sha384", 00:15:20.002 "dhgroup": "ffdhe6144" 00:15:20.002 } 00:15:20.002 } 00:15:20.002 ]' 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.002 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.260 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.260 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:20.260 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.517 11:19:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:21.451 11:19:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:21.451 11:19:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.015 00:15:22.015 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.015 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.015 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.272 { 00:15:22.272 "cntlid": 87, 00:15:22.272 "qid": 0, 00:15:22.272 "state": "enabled", 00:15:22.272 "thread": "nvmf_tgt_poll_group_000", 00:15:22.272 "listen_address": { 00:15:22.272 "trtype": "TCP", 00:15:22.272 "adrfam": "IPv4", 00:15:22.272 "traddr": "10.0.0.2", 00:15:22.272 "trsvcid": "4420" 00:15:22.272 }, 00:15:22.272 "peer_address": { 00:15:22.272 "trtype": "TCP", 00:15:22.272 "adrfam": "IPv4", 00:15:22.272 "traddr": "10.0.0.1", 00:15:22.272 "trsvcid": "59072" 00:15:22.272 }, 00:15:22.272 "auth": { 00:15:22.272 "state": "completed", 00:15:22.272 "digest": "sha384", 00:15:22.272 "dhgroup": "ffdhe6144" 00:15:22.272 } 00:15:22.272 } 00:15:22.272 ]' 00:15:22.272 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.529 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.787 11:19:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:23.719 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.977 11:19:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.910 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.910 11:19:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.910 11:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.910 { 00:15:24.910 "cntlid": 89, 00:15:24.910 "qid": 0, 00:15:24.910 "state": "enabled", 00:15:24.910 "thread": "nvmf_tgt_poll_group_000", 00:15:24.910 "listen_address": { 00:15:24.910 "trtype": "TCP", 00:15:24.910 "adrfam": "IPv4", 00:15:24.910 "traddr": "10.0.0.2", 00:15:24.910 "trsvcid": "4420" 00:15:24.910 }, 00:15:24.910 "peer_address": { 00:15:24.911 "trtype": "TCP", 00:15:24.911 "adrfam": "IPv4", 00:15:24.911 "traddr": "10.0.0.1", 00:15:24.911 "trsvcid": "59088" 00:15:24.911 }, 00:15:24.911 "auth": { 00:15:24.911 "state": "completed", 00:15:24.911 "digest": "sha384", 00:15:24.911 "dhgroup": "ffdhe8192" 00:15:24.911 } 00:15:24.911 } 00:15:24.911 ]' 00:15:24.911 11:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.911 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.911 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.167 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.167 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.167 11:19:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.167 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.167 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.424 11:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:26.356 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.357 11:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.288 00:15:27.288 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:15:27.288 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.288 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.546 { 00:15:27.546 "cntlid": 91, 00:15:27.546 "qid": 0, 00:15:27.546 "state": "enabled", 00:15:27.546 "thread": "nvmf_tgt_poll_group_000", 00:15:27.546 "listen_address": { 00:15:27.546 "trtype": "TCP", 00:15:27.546 "adrfam": "IPv4", 00:15:27.546 "traddr": "10.0.0.2", 00:15:27.546 "trsvcid": "4420" 00:15:27.546 }, 00:15:27.546 "peer_address": { 00:15:27.546 "trtype": "TCP", 00:15:27.546 "adrfam": "IPv4", 00:15:27.546 "traddr": "10.0.0.1", 00:15:27.546 "trsvcid": "59106" 00:15:27.546 }, 00:15:27.546 "auth": { 00:15:27.546 "state": "completed", 00:15:27.546 "digest": "sha384", 00:15:27.546 "dhgroup": "ffdhe8192" 00:15:27.546 } 00:15:27.546 } 00:15:27.546 ]' 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.546 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.803 11:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:28.736 11:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.303 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.866 
00:15:29.866 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.867 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.867 11:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.123 { 00:15:30.123 "cntlid": 93, 00:15:30.123 "qid": 0, 00:15:30.123 "state": "enabled", 00:15:30.123 "thread": "nvmf_tgt_poll_group_000", 00:15:30.123 "listen_address": { 00:15:30.123 "trtype": "TCP", 00:15:30.123 "adrfam": "IPv4", 00:15:30.123 "traddr": "10.0.0.2", 00:15:30.123 "trsvcid": "4420" 00:15:30.123 }, 00:15:30.123 "peer_address": { 00:15:30.123 "trtype": "TCP", 00:15:30.123 "adrfam": "IPv4", 00:15:30.123 "traddr": "10.0.0.1", 00:15:30.123 "trsvcid": "59134" 00:15:30.123 }, 00:15:30.123 "auth": { 00:15:30.123 "state": "completed", 00:15:30.123 "digest": "sha384", 00:15:30.123 "dhgroup": "ffdhe8192" 00:15:30.123 } 00:15:30.123 } 00:15:30.123 ]' 00:15:30.123 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.381 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.381 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.381 11:19:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.381 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.381 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.381 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.381 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.639 11:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:15:31.572 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.830 11:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.763 
00:15:32.763 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.763 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.763 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.764 { 00:15:32.764 "cntlid": 95, 00:15:32.764 "qid": 0, 00:15:32.764 "state": "enabled", 00:15:32.764 "thread": "nvmf_tgt_poll_group_000", 00:15:32.764 "listen_address": { 00:15:32.764 "trtype": "TCP", 00:15:32.764 "adrfam": "IPv4", 00:15:32.764 "traddr": "10.0.0.2", 00:15:32.764 "trsvcid": "4420" 00:15:32.764 }, 00:15:32.764 "peer_address": { 00:15:32.764 "trtype": "TCP", 00:15:32.764 "adrfam": "IPv4", 00:15:32.764 "traddr": "10.0.0.1", 00:15:32.764 "trsvcid": "59160" 00:15:32.764 }, 00:15:32.764 "auth": { 00:15:32.764 "state": "completed", 00:15:32.764 "digest": "sha384", 00:15:32.764 "dhgroup": "ffdhe8192" 00:15:32.764 } 00:15:32.764 } 00:15:32.764 ]' 00:15:32.764 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.021 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.021 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.021 11:19:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.021 11:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.021 11:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.021 11:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.021 11:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.279 11:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
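At this point the trace rolls over from the sha384/ffdhe8192 pass into sha512 with the null DH group: `target/auth.sh` nests `for digest`, `for dhgroup`, and `for keyid` loops and runs one connect/authenticate/disconnect cycle per combination. A minimal sketch of that iteration order follows; note that the full digest and dhgroup lists are an assumption on my part (this log excerpt only confirms `sha384`, `sha512`, `ffdhe8192`, `null`, and key indices 0 through 3).

```python
from itertools import product

# Assumed parameter space -- the trace above only shows sha384/sha512,
# ffdhe8192/null, and --dhchap-key key0..key3, so the remaining entries
# are a guess at what the outer loops cover.
digests = ["sha256", "sha384", "sha512"]
dhgroups = ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096",
            "ffdhe6144", "ffdhe8192"]
keys = range(4)

def auth_matrix():
    """Yield every (digest, dhgroup, keyid) combination in the same
    nesting order as the for-loops in target/auth.sh (digest outermost,
    key index innermost)."""
    yield from product(digests, dhgroups, keys)

combos = list(auth_matrix())
```

Each tuple corresponds to one `bdev_nvme_set_options` + `connect_authenticate` cycle in the log.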
00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:34.213 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.472 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.762 00:15:34.762 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.762 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.762 11:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.043 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.043 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.043 11:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.043 11:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.043 11:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.043 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.043 { 00:15:35.043 "cntlid": 97, 00:15:35.043 "qid": 0, 00:15:35.043 "state": "enabled", 00:15:35.043 "thread": "nvmf_tgt_poll_group_000", 00:15:35.043 "listen_address": { 00:15:35.043 "trtype": "TCP", 00:15:35.043 "adrfam": "IPv4", 00:15:35.043 "traddr": "10.0.0.2", 00:15:35.043 "trsvcid": "4420" 00:15:35.043 }, 00:15:35.043 "peer_address": { 00:15:35.043 "trtype": "TCP", 00:15:35.043 "adrfam": "IPv4", 00:15:35.043 "traddr": "10.0.0.1", 00:15:35.043 "trsvcid": "59198" 00:15:35.043 }, 00:15:35.043 "auth": { 00:15:35.043 "state": "completed", 00:15:35.043 "digest": "sha512", 00:15:35.043 "dhgroup": "null" 00:15:35.043 } 00:15:35.043 } 00:15:35.043 ]' 00:15:35.044 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:15:35.044 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.044 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.301 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:35.301 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.301 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.301 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.301 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.558 11:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.490 11:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.750 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.750 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.011 00:15:37.011 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.011 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.011 11:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.278 { 00:15:37.278 "cntlid": 99, 00:15:37.278 "qid": 0, 00:15:37.278 "state": "enabled", 00:15:37.278 "thread": "nvmf_tgt_poll_group_000", 00:15:37.278 "listen_address": { 00:15:37.278 "trtype": "TCP", 00:15:37.278 "adrfam": "IPv4", 00:15:37.278 "traddr": "10.0.0.2", 00:15:37.278 "trsvcid": "4420" 00:15:37.278 }, 00:15:37.278 "peer_address": { 00:15:37.278 "trtype": "TCP", 00:15:37.278 "adrfam": "IPv4", 00:15:37.278 "traddr": "10.0.0.1", 00:15:37.278 "trsvcid": "59228" 00:15:37.278 }, 00:15:37.278 "auth": { 00:15:37.278 "state": "completed", 00:15:37.278 "digest": "sha512", 00:15:37.278 "dhgroup": "null" 00:15:37.278 } 00:15:37.278 } 00:15:37.278 ]' 00:15:37.278 
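Each cycle validates the qpair list returned by `nvmf_subsystem_get_qpairs` with three `jq` probes (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`), as seen in the `target/auth.sh@46`-`@48` lines. The same checks in Python, applied to the qpair entry printed just above (cntlid 99, the sha512/null pass); `check_qpair_auth` is my own helper name, not an SPDK API:

```python
import json

def check_qpair_auth(qpairs_json, digest, dhgroup):
    """Equivalent of the three jq probes in target/auth.sh: the first
    qpair must report completed authentication with the expected
    digest and DH group."""
    qpairs = json.loads(qpairs_json)
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# Trimmed copy of the qpair entry dumped in the trace above.
qpairs = '''[{"cntlid": 99, "qid": 0, "state": "enabled",
  "thread": "nvmf_tgt_poll_group_000",
  "auth": {"state": "completed", "digest": "sha512", "dhgroup": "null"}}]'''
```

The shell test only proceeds to `bdev_nvme_detach_controller` once all three comparisons succeed.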
11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.278 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.536 11:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.470 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.728 11:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.728 11:20:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.985 00:15:38.985 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.985 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.985 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.241 { 00:15:39.241 "cntlid": 101, 00:15:39.241 "qid": 0, 00:15:39.241 "state": "enabled", 00:15:39.241 "thread": "nvmf_tgt_poll_group_000", 00:15:39.241 "listen_address": { 00:15:39.241 "trtype": "TCP", 00:15:39.241 "adrfam": "IPv4", 00:15:39.241 "traddr": "10.0.0.2", 00:15:39.241 "trsvcid": "4420" 00:15:39.241 }, 00:15:39.241 "peer_address": { 00:15:39.241 "trtype": "TCP", 00:15:39.241 "adrfam": "IPv4", 00:15:39.241 "traddr": "10.0.0.1", 00:15:39.241 "trsvcid": "59248" 00:15:39.241 }, 00:15:39.241 "auth": { 00:15:39.241 "state": "completed", 00:15:39.241 "digest": "sha512", 00:15:39.241 "dhgroup": "null" 
00:15:39.241 } 00:15:39.241 } 00:15:39.241 ]' 00:15:39.241 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.498 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.755 11:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.689 11:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.947 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.947 11:20:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.512 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.512 { 00:15:41.512 "cntlid": 103, 00:15:41.512 "qid": 0, 00:15:41.512 "state": "enabled", 00:15:41.512 "thread": "nvmf_tgt_poll_group_000", 00:15:41.512 "listen_address": { 00:15:41.512 "trtype": "TCP", 00:15:41.512 "adrfam": "IPv4", 00:15:41.512 "traddr": "10.0.0.2", 00:15:41.512 "trsvcid": "4420" 00:15:41.512 }, 00:15:41.512 "peer_address": { 00:15:41.512 "trtype": "TCP", 00:15:41.512 "adrfam": "IPv4", 00:15:41.512 "traddr": "10.0.0.1", 00:15:41.512 "trsvcid": "59282" 00:15:41.512 }, 00:15:41.512 "auth": { 00:15:41.512 "state": "completed", 00:15:41.512 "digest": "sha512", 00:15:41.512 "dhgroup": "null" 00:15:41.512 } 00:15:41.512 } 
00:15:41.512 ]' 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.512 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.770 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:41.770 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.770 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.770 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.770 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.027 11:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:42.959 11:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.217 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.473 00:15:43.473 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.474 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.474 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.730 { 00:15:43.730 "cntlid": 105, 00:15:43.730 "qid": 0, 00:15:43.730 "state": "enabled", 00:15:43.730 "thread": "nvmf_tgt_poll_group_000", 00:15:43.730 "listen_address": { 00:15:43.730 "trtype": "TCP", 00:15:43.730 "adrfam": "IPv4", 00:15:43.730 "traddr": "10.0.0.2", 00:15:43.730 "trsvcid": "4420" 00:15:43.730 }, 00:15:43.730 "peer_address": { 00:15:43.730 "trtype": "TCP", 00:15:43.730 "adrfam": "IPv4", 00:15:43.730 "traddr": "10.0.0.1", 00:15:43.730 "trsvcid": "32852" 00:15:43.730 }, 00:15:43.730 "auth": { 00:15:43.730 
"state": "completed", 00:15:43.730 "digest": "sha512", 00:15:43.730 "dhgroup": "ffdhe2048" 00:15:43.730 } 00:15:43.730 } 00:15:43.730 ]' 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.730 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.987 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.987 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.987 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.987 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.987 11:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.244 11:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.200 11:20:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:45.200 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.457 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.713 00:15:45.713 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.713 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.713 11:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.971 { 00:15:45.971 "cntlid": 107, 00:15:45.971 "qid": 0, 00:15:45.971 "state": "enabled", 00:15:45.971 "thread": "nvmf_tgt_poll_group_000", 00:15:45.971 "listen_address": { 00:15:45.971 "trtype": "TCP", 00:15:45.971 "adrfam": "IPv4", 00:15:45.971 "traddr": "10.0.0.2", 00:15:45.971 "trsvcid": "4420" 00:15:45.971 }, 00:15:45.971 "peer_address": { 00:15:45.971 "trtype": "TCP", 
00:15:45.971 "adrfam": "IPv4", 00:15:45.971 "traddr": "10.0.0.1", 00:15:45.971 "trsvcid": "32886" 00:15:45.971 }, 00:15:45.971 "auth": { 00:15:45.971 "state": "completed", 00:15:45.971 "digest": "sha512", 00:15:45.971 "dhgroup": "ffdhe2048" 00:15:45.971 } 00:15:45.971 } 00:15:45.971 ]' 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.971 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.229 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.229 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.229 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.486 11:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.417 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.675 00:15:47.934 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.934 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.934 11:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.934 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.934 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.934 11:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.934 11:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.934 11:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.934 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.934 { 00:15:47.934 "cntlid": 109, 00:15:47.934 "qid": 0, 00:15:47.934 "state": "enabled", 00:15:47.934 "thread": "nvmf_tgt_poll_group_000", 00:15:47.934 "listen_address": { 00:15:47.934 "trtype": "TCP", 00:15:47.934 "adrfam": "IPv4", 00:15:47.934 "traddr": "10.0.0.2", 00:15:47.934 "trsvcid": "4420" 
00:15:47.934 }, 00:15:47.934 "peer_address": { 00:15:47.934 "trtype": "TCP", 00:15:47.934 "adrfam": "IPv4", 00:15:47.934 "traddr": "10.0.0.1", 00:15:47.934 "trsvcid": "32914" 00:15:47.934 }, 00:15:47.934 "auth": { 00:15:47.934 "state": "completed", 00:15:47.934 "digest": "sha512", 00:15:47.934 "dhgroup": "ffdhe2048" 00:15:47.934 } 00:15:47.934 } 00:15:47.934 ]' 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.192 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.450 11:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:49.384 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.642 11:20:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.642 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.900 00:15:49.900 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.900 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.900 11:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.157 { 00:15:50.157 "cntlid": 111, 00:15:50.157 "qid": 0, 00:15:50.157 "state": "enabled", 00:15:50.157 "thread": "nvmf_tgt_poll_group_000", 00:15:50.157 "listen_address": { 00:15:50.157 "trtype": "TCP", 00:15:50.157 "adrfam": "IPv4", 00:15:50.157 "traddr": "10.0.0.2", 
00:15:50.157 "trsvcid": "4420" 00:15:50.157 }, 00:15:50.157 "peer_address": { 00:15:50.157 "trtype": "TCP", 00:15:50.157 "adrfam": "IPv4", 00:15:50.157 "traddr": "10.0.0.1", 00:15:50.157 "trsvcid": "32942" 00:15:50.157 }, 00:15:50.157 "auth": { 00:15:50.157 "state": "completed", 00:15:50.157 "digest": "sha512", 00:15:50.157 "dhgroup": "ffdhe2048" 00:15:50.157 } 00:15:50.157 } 00:15:50.157 ]' 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.157 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.415 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.415 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.415 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.673 11:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.607 11:20:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.607 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.865 00:15:51.865 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.865 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.865 11:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.124 { 00:15:52.124 "cntlid": 113, 00:15:52.124 "qid": 0, 00:15:52.124 "state": "enabled", 00:15:52.124 "thread": 
"nvmf_tgt_poll_group_000", 00:15:52.124 "listen_address": { 00:15:52.124 "trtype": "TCP", 00:15:52.124 "adrfam": "IPv4", 00:15:52.124 "traddr": "10.0.0.2", 00:15:52.124 "trsvcid": "4420" 00:15:52.124 }, 00:15:52.124 "peer_address": { 00:15:52.124 "trtype": "TCP", 00:15:52.124 "adrfam": "IPv4", 00:15:52.124 "traddr": "10.0.0.1", 00:15:52.124 "trsvcid": "53266" 00:15:52.124 }, 00:15:52.124 "auth": { 00:15:52.124 "state": "completed", 00:15:52.124 "digest": "sha512", 00:15:52.124 "dhgroup": "ffdhe3072" 00:15:52.124 } 00:15:52.124 } 00:15:52.124 ]' 00:15:52.124 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.397 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.659 11:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:15:53.590 11:20:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.590 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.848 11:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.106 00:15:54.106 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.106 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.106 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:54.364 { 00:15:54.364 "cntlid": 115, 00:15:54.364 "qid": 0, 00:15:54.364 "state": "enabled", 00:15:54.364 "thread": "nvmf_tgt_poll_group_000", 00:15:54.364 "listen_address": { 00:15:54.364 "trtype": "TCP", 00:15:54.364 "adrfam": "IPv4", 00:15:54.364 "traddr": "10.0.0.2", 00:15:54.364 "trsvcid": "4420" 00:15:54.364 }, 00:15:54.364 "peer_address": { 00:15:54.364 "trtype": "TCP", 00:15:54.364 "adrfam": "IPv4", 00:15:54.364 "traddr": "10.0.0.1", 00:15:54.364 "trsvcid": "53304" 00:15:54.364 }, 00:15:54.364 "auth": { 00:15:54.364 "state": "completed", 00:15:54.364 "digest": "sha512", 00:15:54.364 "dhgroup": "ffdhe3072" 00:15:54.364 } 00:15:54.364 } 00:15:54.364 ]' 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.364 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.622 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.622 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.622 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.879 11:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret 
DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:55.845 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.130 11:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.387 00:15:56.387 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.387 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.387 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.645 { 00:15:56.645 "cntlid": 117, 00:15:56.645 "qid": 0, 00:15:56.645 "state": "enabled", 00:15:56.645 "thread": "nvmf_tgt_poll_group_000", 00:15:56.645 "listen_address": { 00:15:56.645 "trtype": "TCP", 00:15:56.645 "adrfam": "IPv4", 00:15:56.645 "traddr": "10.0.0.2", 00:15:56.645 "trsvcid": "4420" 00:15:56.645 }, 00:15:56.645 "peer_address": { 00:15:56.645 "trtype": "TCP", 00:15:56.645 "adrfam": "IPv4", 00:15:56.645 "traddr": "10.0.0.1", 00:15:56.645 "trsvcid": "53316" 00:15:56.645 }, 00:15:56.645 "auth": { 00:15:56.645 "state": "completed", 00:15:56.645 "digest": "sha512", 00:15:56.645 "dhgroup": "ffdhe3072" 00:15:56.645 } 00:15:56.645 } 00:15:56.645 ]' 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.645 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.902 11:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:57.833 11:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.091 11:20:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.091 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.657 00:15:58.657 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.657 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.657 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.915 { 00:15:58.915 "cntlid": 119, 00:15:58.915 "qid": 0, 00:15:58.915 "state": "enabled", 00:15:58.915 "thread": "nvmf_tgt_poll_group_000", 00:15:58.915 "listen_address": { 00:15:58.915 "trtype": "TCP", 00:15:58.915 "adrfam": "IPv4", 00:15:58.915 "traddr": "10.0.0.2", 00:15:58.915 "trsvcid": "4420" 00:15:58.915 }, 00:15:58.915 "peer_address": { 00:15:58.915 "trtype": "TCP", 00:15:58.915 "adrfam": "IPv4", 00:15:58.915 "traddr": "10.0.0.1", 00:15:58.915 "trsvcid": "53346" 00:15:58.915 }, 00:15:58.915 "auth": { 00:15:58.915 "state": "completed", 00:15:58.915 "digest": "sha512", 00:15:58.915 "dhgroup": "ffdhe3072" 00:15:58.915 } 00:15:58.915 } 00:15:58.915 ]' 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.915 11:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.172 11:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.105 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.363 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:00.621
00:16:00.621 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:00.621 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:00.621 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:00.879 {
00:16:00.879 "cntlid": 121,
00:16:00.879 "qid": 0,
00:16:00.879 "state": "enabled",
00:16:00.879 "thread": "nvmf_tgt_poll_group_000",
00:16:00.879 "listen_address": {
00:16:00.879 "trtype": "TCP",
00:16:00.879 "adrfam": "IPv4",
00:16:00.879 "traddr": "10.0.0.2",
00:16:00.879 "trsvcid": "4420"
00:16:00.879 },
00:16:00.879 "peer_address": {
00:16:00.879 "trtype": "TCP",
00:16:00.879 "adrfam": "IPv4",
00:16:00.879 "traddr": "10.0.0.1",
00:16:00.879 "trsvcid": "53382"
00:16:00.879 },
00:16:00.879 "auth": {
00:16:00.879 "state": "completed",
00:16:00.879 "digest": "sha512",
00:16:00.879 "dhgroup": "ffdhe4096"
00:16:00.879 }
00:16:00.879 }
00:16:00.879 ]'
00:16:00.879 11:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:01.136 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:01.136 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:01.136 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:01.136 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:01.136 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:01.136 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:01.137 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:01.394 11:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=:
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:02.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:02.327 11:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.584 11:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:02.584 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.585 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:02.842
00:16:02.842 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:02.842 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:02.842 11:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:03.100 {
00:16:03.100 "cntlid": 123,
00:16:03.100 "qid": 0,
00:16:03.100 "state": "enabled",
00:16:03.100 "thread": "nvmf_tgt_poll_group_000",
00:16:03.100 "listen_address": {
00:16:03.100 "trtype": "TCP",
00:16:03.100 "adrfam": "IPv4",
00:16:03.100 "traddr": "10.0.0.2",
00:16:03.100 "trsvcid": "4420"
00:16:03.100 },
00:16:03.100 "peer_address": {
00:16:03.100 "trtype": "TCP",
00:16:03.100 "adrfam": "IPv4",
00:16:03.100 "traddr": "10.0.0.1",
00:16:03.100 "trsvcid": "43374"
00:16:03.100 },
00:16:03.100 "auth": {
00:16:03.100 "state": "completed",
00:16:03.100 "digest": "sha512",
00:16:03.100 "dhgroup": "ffdhe4096"
00:16:03.100 }
00:16:03.100 }
00:16:03.100 ]'
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:03.100 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:03.358 11:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==:
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:04.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:04.292 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:04.550 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.116
00:16:05.116 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:05.116 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:05.116 11:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:05.116 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:05.116 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:05.116 11:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:05.116 11:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.116 11:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:05.116 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:05.116 {
00:16:05.116 "cntlid": 125,
00:16:05.116 "qid": 0,
00:16:05.116 "state": "enabled",
00:16:05.116 "thread": "nvmf_tgt_poll_group_000",
00:16:05.116 "listen_address": {
00:16:05.116 "trtype": "TCP",
00:16:05.116 "adrfam": "IPv4",
00:16:05.116 "traddr": "10.0.0.2",
00:16:05.116 "trsvcid": "4420"
00:16:05.116 },
00:16:05.116 "peer_address": {
00:16:05.116 "trtype": "TCP",
00:16:05.116 "adrfam": "IPv4",
00:16:05.116 "traddr": "10.0.0.1",
00:16:05.116 "trsvcid": "43402"
00:16:05.116 },
00:16:05.116 "auth": {
00:16:05.116 "state": "completed",
00:16:05.116 "digest": "sha512",
00:16:05.116 "dhgroup": "ffdhe4096"
00:16:05.116 }
00:16:05.116 }
00:16:05.116 ]'
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:05.374 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:05.632 11:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn:
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:06.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:06.564 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:06.821 11:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:07.385
00:16:07.385 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:07.385 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:07.385 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:07.643 {
00:16:07.643 "cntlid": 127,
00:16:07.643 "qid": 0,
00:16:07.643 "state": "enabled",
00:16:07.643 "thread": "nvmf_tgt_poll_group_000",
00:16:07.643 "listen_address": {
00:16:07.643 "trtype": "TCP",
00:16:07.643 "adrfam": "IPv4",
00:16:07.643 "traddr": "10.0.0.2",
00:16:07.643 "trsvcid": "4420"
00:16:07.643 },
00:16:07.643 "peer_address": {
00:16:07.643 "trtype": "TCP",
00:16:07.643 "adrfam": "IPv4",
00:16:07.643 "traddr": "10.0.0.1",
00:16:07.643 "trsvcid": "43444"
00:16:07.643 },
00:16:07.643 "auth": {
00:16:07.643 "state": "completed",
00:16:07.643 "digest": "sha512",
00:16:07.643 "dhgroup": "ffdhe4096"
00:16:07.643 }
00:16:07.643 }
00:16:07.643 ]'
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:07.643 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:07.901 11:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=:
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:08.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:08.833 11:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:09.091 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:09.656
00:16:09.656 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:09.656 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:09.656 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:09.913 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:09.913 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:09.913 11:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.913 11:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.913 11:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.913 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:09.913 {
00:16:09.913 "cntlid": 129,
00:16:09.913 "qid": 0,
00:16:09.913 "state": "enabled",
00:16:09.913 "thread": "nvmf_tgt_poll_group_000",
00:16:09.913 "listen_address": {
00:16:09.913 "trtype": "TCP",
00:16:09.914 "adrfam": "IPv4",
00:16:09.914 "traddr": "10.0.0.2",
00:16:09.914 "trsvcid": "4420"
00:16:09.914 },
00:16:09.914 "peer_address": {
00:16:09.914 "trtype": "TCP",
00:16:09.914 "adrfam": "IPv4",
00:16:09.914 "traddr": "10.0.0.1",
00:16:09.914 "trsvcid": "43480"
00:16:09.914 },
00:16:09.914 "auth": {
00:16:09.914 "state": "completed",
00:16:09.914 "digest": "sha512",
00:16:09.914 "dhgroup": "ffdhe6144"
00:16:09.914 }
00:16:09.914 }
00:16:09.914 ]'
00:16:09.914 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:09.914 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:09.914 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:09.914 11:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:09.914 11:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:09.914 11:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:09.914 11:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:09.914 11:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.171 11:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=:
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:11.101 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:11.358 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:11.922
00:16:11.922 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:11.922 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:11.922 11:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.178 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.178 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.178 11:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.178 11:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.178 11:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.178 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:12.178 {
00:16:12.178 "cntlid": 131,
00:16:12.178 "qid": 0,
00:16:12.178 "state": "enabled",
00:16:12.178 "thread": "nvmf_tgt_poll_group_000",
00:16:12.178 "listen_address": {
00:16:12.178 "trtype": "TCP",
00:16:12.178 "adrfam": "IPv4",
00:16:12.178 "traddr": "10.0.0.2",
00:16:12.178 "trsvcid": "4420"
00:16:12.178 },
00:16:12.178 "peer_address": {
00:16:12.178 "trtype": "TCP",
00:16:12.178 "adrfam": "IPv4",
00:16:12.179 "traddr": "10.0.0.1",
00:16:12.179 "trsvcid": "34392"
00:16:12.179 },
00:16:12.179 "auth": {
00:16:12.179 "state": "completed",
00:16:12.179 "digest": "sha512",
00:16:12.179 "dhgroup": "ffdhe6144"
00:16:12.179 }
00:16:12.179 }
00:16:12.179 ]'
00:16:12.179 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:12.179 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:12.179 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:12.179 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:12.179 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:12.435 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.435 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.435 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.692 11:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==:
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:13.624 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:13.881 11:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:14.444
00:16:14.444 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:14.444 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:14.444 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:14.702 {
00:16:14.702 "cntlid": 133,
00:16:14.702 "qid": 0,
00:16:14.702 "state": "enabled",
00:16:14.702 "thread": "nvmf_tgt_poll_group_000",
00:16:14.702 "listen_address": {
00:16:14.702 "trtype": "TCP",
00:16:14.702 "adrfam": "IPv4",
00:16:14.702 "traddr": "10.0.0.2",
00:16:14.702 "trsvcid": "4420"
00:16:14.702 },
00:16:14.702 "peer_address": {
00:16:14.702 "trtype": "TCP",
00:16:14.702 "adrfam": "IPv4",
00:16:14.702 "traddr": "10.0.0.1",
00:16:14.702 "trsvcid": "34404"
00:16:14.702 },
00:16:14.702 "auth": {
00:16:14.702 "state": "completed",
00:16:14.702 "digest": "sha512",
00:16:14.702 "dhgroup": "ffdhe6144"
00:16:14.702 }
00:16:14.702 }
00:16:14.702 ]'
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.702 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:14.959 11:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn:
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:15.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:15.890 11:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.147 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.713 00:16:16.713 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:16:16.713 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.713 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.972 { 00:16:16.972 "cntlid": 135, 00:16:16.972 "qid": 0, 00:16:16.972 "state": "enabled", 00:16:16.972 "thread": "nvmf_tgt_poll_group_000", 00:16:16.972 "listen_address": { 00:16:16.972 "trtype": "TCP", 00:16:16.972 "adrfam": "IPv4", 00:16:16.972 "traddr": "10.0.0.2", 00:16:16.972 "trsvcid": "4420" 00:16:16.972 }, 00:16:16.972 "peer_address": { 00:16:16.972 "trtype": "TCP", 00:16:16.972 "adrfam": "IPv4", 00:16:16.972 "traddr": "10.0.0.1", 00:16:16.972 "trsvcid": "34428" 00:16:16.972 }, 00:16:16.972 "auth": { 00:16:16.972 "state": "completed", 00:16:16.972 "digest": "sha512", 00:16:16.972 "dhgroup": "ffdhe6144" 00:16:16.972 } 00:16:16.972 } 00:16:16.972 ]' 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:16:16.972 11:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.972 11:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.972 11:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.972 11:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.288 11:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.259 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.259 11:20:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.517 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.518 11:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.451 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.451 11:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.709 { 00:16:19.709 "cntlid": 137, 00:16:19.709 "qid": 0, 00:16:19.709 "state": "enabled", 00:16:19.709 "thread": "nvmf_tgt_poll_group_000", 00:16:19.709 "listen_address": { 00:16:19.709 "trtype": "TCP", 00:16:19.709 "adrfam": "IPv4", 00:16:19.709 "traddr": "10.0.0.2", 00:16:19.709 "trsvcid": "4420" 00:16:19.709 }, 00:16:19.709 "peer_address": { 00:16:19.709 "trtype": "TCP", 00:16:19.709 "adrfam": "IPv4", 00:16:19.709 "traddr": "10.0.0.1", 00:16:19.709 "trsvcid": "34458" 00:16:19.709 }, 00:16:19.709 "auth": { 00:16:19.709 "state": "completed", 00:16:19.709 "digest": "sha512", 00:16:19.709 "dhgroup": "ffdhe8192" 00:16:19.709 } 00:16:19.709 } 00:16:19.709 ]' 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.709 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.967 11:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:20.901 11:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.159 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.089 00:16:22.089 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.089 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.089 11:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.346 { 00:16:22.346 "cntlid": 139, 00:16:22.346 "qid": 0, 00:16:22.346 "state": "enabled", 00:16:22.346 "thread": "nvmf_tgt_poll_group_000", 00:16:22.346 "listen_address": { 00:16:22.346 "trtype": "TCP", 00:16:22.346 "adrfam": "IPv4", 00:16:22.346 "traddr": "10.0.0.2", 00:16:22.346 "trsvcid": "4420" 00:16:22.346 }, 00:16:22.346 "peer_address": { 00:16:22.346 "trtype": "TCP", 00:16:22.346 "adrfam": "IPv4", 00:16:22.346 "traddr": "10.0.0.1", 00:16:22.346 "trsvcid": "34480" 00:16:22.346 }, 00:16:22.346 "auth": { 00:16:22.346 "state": "completed", 00:16:22.346 "digest": "sha512", 00:16:22.346 "dhgroup": "ffdhe8192" 00:16:22.346 } 00:16:22.346 } 00:16:22.346 ]' 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.346 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.602 11:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTNmZDU5ODg4NTA5N2Y2ZDE2N2JmMzQyNzM4ZjYwN2IsnuM6: --dhchap-ctrl-secret DHHC-1:02:YjQ3YjkwYzNkZTcyNzRjNjNhMDMzNWZjZjI0MmU5ZTI2MGE0ZTk0YWU2MzlmNzBisRFJmg==: 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:23.531 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.787 11:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.719 00:16:24.719 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.719 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.719 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.976 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.976 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.976 11:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.976 11:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.976 11:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.976 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.976 { 00:16:24.976 "cntlid": 141, 00:16:24.976 "qid": 0, 00:16:24.976 "state": "enabled", 00:16:24.976 "thread": "nvmf_tgt_poll_group_000", 00:16:24.976 "listen_address": { 00:16:24.976 "trtype": "TCP", 00:16:24.976 "adrfam": "IPv4", 00:16:24.976 "traddr": "10.0.0.2", 00:16:24.977 "trsvcid": "4420" 00:16:24.977 }, 00:16:24.977 "peer_address": { 00:16:24.977 "trtype": "TCP", 00:16:24.977 "adrfam": "IPv4", 00:16:24.977 "traddr": "10.0.0.1", 00:16:24.977 "trsvcid": "55728" 00:16:24.977 }, 00:16:24.977 "auth": { 00:16:24.977 "state": "completed", 00:16:24.977 "digest": "sha512", 00:16:24.977 "dhgroup": "ffdhe8192" 00:16:24.977 } 00:16:24.977 } 00:16:24.977 ]' 00:16:24.977 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:24.977 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.977 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.977 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.977 11:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.977 11:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.977 11:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.977 11:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.233 11:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2YyY2Q1ZTVkM2U4Yzk4ZGMyYzJhMzIyNTIxYWRlZDk2NzJkZmY1ZjVmY2U0OGNjaji5Jw==: --dhchap-ctrl-secret DHHC-1:01:ZTNjYWVmZDg1OTdhN2FlYjNjZmE2ZmExY2UyMTY3OGWGvBGn: 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.165 
11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.165 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.422 11:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.423 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.423 11:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.355 00:16:27.355 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.355 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.355 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.355 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.613 { 00:16:27.613 "cntlid": 143, 00:16:27.613 "qid": 0, 00:16:27.613 "state": "enabled", 00:16:27.613 "thread": "nvmf_tgt_poll_group_000", 00:16:27.613 "listen_address": { 00:16:27.613 "trtype": "TCP", 00:16:27.613 "adrfam": "IPv4", 00:16:27.613 "traddr": "10.0.0.2", 00:16:27.613 "trsvcid": "4420" 00:16:27.613 }, 00:16:27.613 "peer_address": { 00:16:27.613 "trtype": "TCP", 00:16:27.613 "adrfam": "IPv4", 00:16:27.613 "traddr": "10.0.0.1", 00:16:27.613 "trsvcid": "55752" 00:16:27.613 }, 00:16:27.613 "auth": { 00:16:27.613 "state": "completed", 00:16:27.613 "digest": "sha512", 00:16:27.613 "dhgroup": "ffdhe8192" 00:16:27.613 } 00:16:27.613 } 00:16:27.613 ]' 00:16:27.613 11:20:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.613 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.871 11:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.803 
11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.803 11:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.061 11:20:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.061 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.993 00:16:29.993 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.993 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.993 11:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.993 { 00:16:29.993 "cntlid": 145, 00:16:29.993 "qid": 0, 00:16:29.993 "state": "enabled", 00:16:29.993 "thread": "nvmf_tgt_poll_group_000", 00:16:29.993 "listen_address": { 00:16:29.993 "trtype": "TCP", 00:16:29.993 "adrfam": 
"IPv4", 00:16:29.993 "traddr": "10.0.0.2", 00:16:29.993 "trsvcid": "4420" 00:16:29.993 }, 00:16:29.993 "peer_address": { 00:16:29.993 "trtype": "TCP", 00:16:29.993 "adrfam": "IPv4", 00:16:29.993 "traddr": "10.0.0.1", 00:16:29.993 "trsvcid": "55768" 00:16:29.993 }, 00:16:29.993 "auth": { 00:16:29.993 "state": "completed", 00:16:29.993 "digest": "sha512", 00:16:29.993 "dhgroup": "ffdhe8192" 00:16:29.993 } 00:16:29.993 } 00:16:29.993 ]' 00:16:29.993 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.251 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.508 11:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YmFhMWNhMjgyODM2YzA3MmVlZWZhNmIwZjMzNTVkOTFjYTk2YjFjMzY4MDdhZjhkPIy24A==: --dhchap-ctrl-secret DHHC-1:03:Yjg4ZWI0NjU5NjVhYWY1NWNhZGEyZjBkMTYxNmY5YTRiMmUyYjNlNzc0NmQ3NzZiNTJjYjExMWVmMjhjYzdkN4xBKq4=: 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.440 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.440 11:20:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.440 11:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:32.372 request: 00:16:32.372 { 00:16:32.372 "name": "nvme0", 00:16:32.372 "trtype": "tcp", 00:16:32.372 "traddr": "10.0.0.2", 00:16:32.372 "adrfam": "ipv4", 00:16:32.372 "trsvcid": "4420", 00:16:32.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.372 "prchk_reftag": false, 00:16:32.372 "prchk_guard": false, 00:16:32.372 "hdgst": false, 00:16:32.372 "ddgst": false, 00:16:32.372 "dhchap_key": "key2", 00:16:32.372 "method": "bdev_nvme_attach_controller", 00:16:32.372 "req_id": 1 00:16:32.372 } 00:16:32.372 Got JSON-RPC error response 00:16:32.372 response: 00:16:32.372 { 00:16:32.372 "code": -5, 00:16:32.372 "message": "Input/output error" 00:16:32.372 } 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.372 
11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:32.372 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.373 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.373 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:32.937 request: 00:16:32.937 { 00:16:32.937 "name": "nvme0", 00:16:32.937 "trtype": "tcp", 00:16:32.937 "traddr": "10.0.0.2", 00:16:32.937 "adrfam": "ipv4", 00:16:32.937 "trsvcid": "4420", 00:16:32.937 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:32.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.937 "prchk_reftag": false, 00:16:32.937 "prchk_guard": false, 00:16:32.937 "hdgst": false, 00:16:32.937 "ddgst": false, 00:16:32.937 "dhchap_key": "key1", 00:16:32.937 "dhchap_ctrlr_key": "ckey2", 00:16:32.937 "method": "bdev_nvme_attach_controller", 00:16:32.937 "req_id": 1 00:16:32.937 } 00:16:32.937 Got JSON-RPC error response 00:16:32.937 response: 00:16:32.937 { 00:16:32.937 "code": -5, 00:16:32.937 "message": "Input/output error" 00:16:32.937 } 00:16:32.937 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.938 11:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.871 request: 00:16:33.871 { 00:16:33.871 "name": "nvme0", 00:16:33.871 "trtype": "tcp", 00:16:33.871 "traddr": "10.0.0.2", 00:16:33.871 "adrfam": "ipv4", 00:16:33.871 "trsvcid": "4420", 00:16:33.871 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:33.871 "prchk_reftag": false, 00:16:33.871 "prchk_guard": false, 00:16:33.871 "hdgst": false, 00:16:33.871 "ddgst": false, 00:16:33.871 "dhchap_key": "key1", 00:16:33.871 "dhchap_ctrlr_key": "ckey1", 00:16:33.871 "method": "bdev_nvme_attach_controller", 00:16:33.871 "req_id": 1 00:16:33.871 } 00:16:33.871 Got JSON-RPC error response 00:16:33.871 response: 00:16:33.871 { 00:16:33.871 "code": -5, 00:16:33.871 "message": "Input/output error" 00:16:33.871 } 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.871 11:20:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 564908 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 564908 ']' 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 564908 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 564908 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 564908' 00:16:33.871 killing process with pid 564908 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 564908 00:16:33.871 11:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 564908 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 
00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=586158 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 586158 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 586158 ']' 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.129 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 586158 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 586158 ']' 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.387 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.645 11:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.576 00:16:35.576 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.576 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.576 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.834 { 00:16:35.834 "cntlid": 1, 00:16:35.834 "qid": 0, 00:16:35.834 "state": "enabled", 00:16:35.834 "thread": "nvmf_tgt_poll_group_000", 00:16:35.834 "listen_address": { 00:16:35.834 "trtype": "TCP", 00:16:35.834 "adrfam": "IPv4", 00:16:35.834 "traddr": "10.0.0.2", 00:16:35.834 "trsvcid": "4420" 00:16:35.834 }, 00:16:35.834 "peer_address": { 00:16:35.834 "trtype": "TCP", 00:16:35.834 "adrfam": "IPv4", 00:16:35.834 "traddr": "10.0.0.1", 00:16:35.834 "trsvcid": 
"51002" 00:16:35.834 }, 00:16:35.834 "auth": { 00:16:35.834 "state": "completed", 00:16:35.834 "digest": "sha512", 00:16:35.834 "dhgroup": "ffdhe8192" 00:16:35.834 } 00:16:35.834 } 00:16:35.834 ]' 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.834 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.091 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.091 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.091 11:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.349 11:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MmU5Zjk2YjVhZDRjNzhkYjJiY2UyMDkzOTEwZjNlMWVjNTVlMTk5NWJhMDU4NTE4M2I5ODEwNjI3NTRiYjMxNR7UyM4=: 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.282 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.283 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.540 request: 00:16:37.540 { 00:16:37.540 "name": "nvme0", 00:16:37.540 "trtype": "tcp", 00:16:37.540 "traddr": "10.0.0.2", 00:16:37.540 "adrfam": "ipv4", 00:16:37.540 "trsvcid": "4420", 00:16:37.540 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:37.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:37.540 "prchk_reftag": false, 00:16:37.540 "prchk_guard": false, 00:16:37.540 "hdgst": false, 00:16:37.540 "ddgst": false, 00:16:37.540 "dhchap_key": "key3", 00:16:37.540 "method": "bdev_nvme_attach_controller", 00:16:37.540 "req_id": 1 00:16:37.540 } 00:16:37.540 Got JSON-RPC error response 00:16:37.540 response: 00:16:37.540 { 00:16:37.540 "code": -5, 00:16:37.540 "message": "Input/output error" 00:16:37.540 } 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:37.540 11:21:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:37.540 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:37.820 11:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.820 
11:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.077 request: 00:16:38.077 { 00:16:38.077 "name": "nvme0", 00:16:38.077 "trtype": "tcp", 00:16:38.077 "traddr": "10.0.0.2", 00:16:38.077 "adrfam": "ipv4", 00:16:38.077 "trsvcid": "4420", 00:16:38.077 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:38.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:38.077 "prchk_reftag": false, 00:16:38.077 "prchk_guard": false, 00:16:38.077 "hdgst": false, 00:16:38.077 "ddgst": false, 00:16:38.077 "dhchap_key": "key3", 00:16:38.077 "method": "bdev_nvme_attach_controller", 00:16:38.077 "req_id": 1 00:16:38.077 } 00:16:38.077 Got JSON-RPC error response 00:16:38.077 response: 00:16:38.077 { 00:16:38.077 "code": -5, 00:16:38.077 "message": "Input/output error" 00:16:38.077 } 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.077 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.333 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:38.589 request: 00:16:38.589 { 00:16:38.589 "name": "nvme0", 00:16:38.589 "trtype": "tcp", 00:16:38.589 "traddr": "10.0.0.2", 00:16:38.589 "adrfam": "ipv4", 00:16:38.589 "trsvcid": "4420", 00:16:38.589 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:38.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:38.589 "prchk_reftag": false, 00:16:38.589 "prchk_guard": false, 00:16:38.589 "hdgst": false, 00:16:38.589 "ddgst": false, 00:16:38.589 "dhchap_key": "key0", 00:16:38.589 "dhchap_ctrlr_key": "key1", 00:16:38.589 "method": "bdev_nvme_attach_controller", 00:16:38.589 "req_id": 1 00:16:38.589 } 00:16:38.589 Got JSON-RPC error response 00:16:38.589 response: 00:16:38.589 { 
00:16:38.589 "code": -5, 00:16:38.589 "message": "Input/output error" 00:16:38.589 } 00:16:38.589 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:38.589 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:38.589 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:38.589 11:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:38.589 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:38.589 11:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:39.153 00:16:39.153 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:39.153 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:39.153 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.153 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.153 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.153 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 564968 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 564968 ']' 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 564968 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 564968 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 564968' 00:16:39.487 killing process with pid 564968 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 564968 00:16:39.487 11:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 564968 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.055 11:21:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.055 rmmod nvme_tcp 00:16:40.055 rmmod nvme_fabrics 00:16:40.055 
rmmod nvme_keyring 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 586158 ']' 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 586158 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 586158 ']' 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 586158 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 586158 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 586158' 00:16:40.055 killing process with pid 586158 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 586158 00:16:40.055 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 586158 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.312 
11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.312 11:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.841 11:21:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:42.841 11:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.BCm /tmp/spdk.key-sha256.Kwg /tmp/spdk.key-sha384.0uJ /tmp/spdk.key-sha512.Arm /tmp/spdk.key-sha512.KEt /tmp/spdk.key-sha384.9ik /tmp/spdk.key-sha256.D6N '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:42.841 00:16:42.841 real 3m1.797s 00:16:42.841 user 7m5.197s 00:16:42.841 sys 0m25.126s 00:16:42.841 11:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.841 11:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.841 ************************************ 00:16:42.841 END TEST nvmf_auth_target 00:16:42.841 ************************************ 00:16:42.841 11:21:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.841 11:21:08 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:42.841 11:21:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:42.841 11:21:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:42.841 11:21:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.841 11:21:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.841 ************************************ 00:16:42.841 
START TEST nvmf_bdevio_no_huge 00:16:42.841 ************************************ 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:42.841 * Looking for test storage... 00:16:42.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.841 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:42.842 11:21:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:42.842 11:21:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.763 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:44.764 11:21:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.764 
11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:44.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:16:44.764 00:16:44.764 --- 10.0.0.2 ping statistics --- 00:16:44.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.764 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:16:44.764 00:16:44.764 --- 10.0.0.1 ping statistics --- 00:16:44.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.764 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:44.764 11:21:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=589494 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 589494 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 589494 ']' 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.764 11:21:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:44.764 [2024-07-12 11:21:10.800438] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:16:44.764 [2024-07-12 11:21:10.800534] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:44.764 [2024-07-12 11:21:10.869603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.022 [2024-07-12 11:21:10.968742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.022 [2024-07-12 11:21:10.968799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.022 [2024-07-12 11:21:10.968827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.022 [2024-07-12 11:21:10.968837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.022 [2024-07-12 11:21:10.968847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.022 [2024-07-12 11:21:10.968988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.022 [2024-07-12 11:21:10.969054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.022 [2024-07-12 11:21:10.969105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:45.022 [2024-07-12 11:21:10.969108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.952 [2024-07-12 11:21:11.757333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.952 Malloc0 00:16:45.952 11:21:11 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.952 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:45.953 [2024-07-12 11:21:11.795482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:45.953 11:21:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:45.953 { 00:16:45.953 "params": { 00:16:45.953 "name": "Nvme$subsystem", 00:16:45.953 "trtype": "$TEST_TRANSPORT", 00:16:45.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:45.953 "adrfam": "ipv4", 00:16:45.953 "trsvcid": "$NVMF_PORT", 00:16:45.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:45.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:45.953 "hdgst": ${hdgst:-false}, 00:16:45.953 "ddgst": ${ddgst:-false} 00:16:45.953 }, 00:16:45.953 "method": "bdev_nvme_attach_controller" 00:16:45.953 } 00:16:45.953 EOF 00:16:45.953 )") 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:45.953 11:21:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:45.953 "params": { 00:16:45.953 "name": "Nvme1", 00:16:45.953 "trtype": "tcp", 00:16:45.953 "traddr": "10.0.0.2", 00:16:45.953 "adrfam": "ipv4", 00:16:45.953 "trsvcid": "4420", 00:16:45.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.953 "hdgst": false, 00:16:45.953 "ddgst": false 00:16:45.953 }, 00:16:45.953 "method": "bdev_nvme_attach_controller" 00:16:45.953 }' 00:16:45.953 [2024-07-12 11:21:11.842175] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:16:45.953 [2024-07-12 11:21:11.842245] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid589651 ] 00:16:45.953 [2024-07-12 11:21:11.905663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.953 [2024-07-12 11:21:12.021534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.953 [2024-07-12 11:21:12.021583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.953 [2024-07-12 11:21:12.021586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.210 I/O targets: 00:16:46.210 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.210 00:16:46.210 00:16:46.210 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.210 http://cunit.sourceforge.net/ 00:16:46.210 00:16:46.210 00:16:46.210 Suite: bdevio tests on: Nvme1n1 00:16:46.210 Test: blockdev write read block ...passed 00:16:46.210 Test: blockdev write zeroes read block ...passed 00:16:46.210 Test: blockdev write zeroes read no split ...passed 00:16:46.210 Test: blockdev write zeroes read split ...passed 00:16:46.210 Test: blockdev write zeroes read split partial ...passed 00:16:46.210 Test: blockdev reset ...[2024-07-12 11:21:12.300658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:46.210 [2024-07-12 11:21:12.300773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1244fb0 (9): Bad file descriptor 00:16:46.210 [2024-07-12 11:21:12.319452] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:46.210 passed 00:16:46.210 Test: blockdev write read 8 blocks ...passed 00:16:46.210 Test: blockdev write read size > 128k ...passed 00:16:46.210 Test: blockdev write read invalid size ...passed 00:16:46.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:46.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:46.467 Test: blockdev write read max offset ...passed 00:16:46.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:46.467 Test: blockdev writev readv 8 blocks ...passed 00:16:46.467 Test: blockdev writev readv 30 x 1block ...passed 00:16:46.467 Test: blockdev writev readv block ...passed 00:16:46.467 Test: blockdev writev readv size > 128k ...passed 00:16:46.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:46.467 Test: blockdev comparev and writev ...[2024-07-12 11:21:12.494030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.494064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.494088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.494106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.494435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.494460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.494483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.494499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.494819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.494843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.494873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.494897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.495221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.495245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.495267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.467 [2024-07-12 11:21:12.495283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:46.467 passed 00:16:46.467 Test: blockdev nvme passthru rw ...passed 00:16:46.467 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:21:12.577128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.467 [2024-07-12 11:21:12.577155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.577295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.467 [2024-07-12 11:21:12.577318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.577458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.467 [2024-07-12 11:21:12.577481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:46.467 [2024-07-12 11:21:12.577621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.467 [2024-07-12 11:21:12.577645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:46.467 passed 00:16:46.467 Test: blockdev nvme admin passthru ...passed 00:16:46.725 Test: blockdev copy ...passed 00:16:46.725 00:16:46.725 Run Summary: Type Total Ran Passed Failed Inactive 00:16:46.725 suites 1 1 n/a 0 0 00:16:46.725 tests 23 23 23 0 0 00:16:46.725 asserts 152 152 152 0 n/a 00:16:46.725 00:16:46.725 Elapsed time = 0.901 seconds 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.983 11:21:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.983 rmmod nvme_tcp 00:16:46.983 rmmod nvme_fabrics 00:16:46.983 rmmod nvme_keyring 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 589494 ']' 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 589494 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 589494 ']' 00:16:46.983 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 589494 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 589494 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 589494' 00:16:46.984 killing process with pid 589494 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 589494 00:16:46.984 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 589494 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.551 11:21:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.453 11:21:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.453 00:16:49.453 real 0m7.082s 00:16:49.453 user 0m12.377s 00:16:49.453 sys 0m2.563s 00:16:49.453 11:21:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.453 11:21:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:49.453 ************************************ 00:16:49.453 END TEST nvmf_bdevio_no_huge 00:16:49.453 ************************************ 00:16:49.453 11:21:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:49.453 11:21:15 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:49.453 11:21:15 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.453 11:21:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.453 11:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.453 ************************************ 00:16:49.453 START TEST nvmf_tls 00:16:49.453 ************************************ 00:16:49.453 11:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:49.712 * Looking for test storage... 00:16:49.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.712 11:21:15 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.712 11:21:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.608 11:21:17 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.608 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.609 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:16:51.866 00:16:51.866 --- 10.0.0.2 ping statistics --- 00:16:51.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.866 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:16:51.866 00:16:51.866 --- 10.0.0.1 ping statistics --- 00:16:51.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.866 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=591718 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 591718 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 591718 ']' 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.866 11:21:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.866 [2024-07-12 11:21:17.939012] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:16:51.866 [2024-07-12 11:21:17.939102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.866 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.124 [2024-07-12 11:21:18.016434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.124 [2024-07-12 11:21:18.137142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.124 [2024-07-12 11:21:18.137208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.124 [2024-07-12 11:21:18.137235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.124 [2024-07-12 11:21:18.137251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.124 [2024-07-12 11:21:18.137260] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:52.124 [2024-07-12 11:21:18.137284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:52.124 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:52.381 true 00:16:52.381 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.381 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:52.637 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:52.637 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:52.637 11:21:18 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:53.201 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.201 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:53.201 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:53.201 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:53.201 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:53.458 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.458 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:53.715 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:53.715 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:53.715 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.715 11:21:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:53.973 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:53.973 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:53.973 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:54.230 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.230 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:54.488 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:54.488 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:54.488 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:54.746 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.746 11:21:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:55.004 11:21:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.S25tEOOl2q 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:55.262 
11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.B0ypxocy0a 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.S25tEOOl2q 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.B0ypxocy0a 00:16:55.262 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:55.519 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:55.777 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.S25tEOOl2q 00:16:55.777 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.S25tEOOl2q 00:16:55.777 11:21:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:56.035 [2024-07-12 11:21:22.055000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.035 11:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:56.292 11:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:56.550 [2024-07-12 11:21:22.628522] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.550 [2024-07-12 11:21:22.628705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:56.550 11:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:56.808 malloc0 00:16:57.074 11:21:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:57.074 11:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.S25tEOOl2q 00:16:57.339 [2024-07-12 11:21:23.409007] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:57.339 11:21:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.S25tEOOl2q 00:16:57.339 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.554 Initializing NVMe Controllers 00:17:09.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:09.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:09.554 Initialization complete. Launching workers. 
00:17:09.554 ======================================================== 00:17:09.554 Latency(us) 00:17:09.554 Device Information : IOPS MiB/s Average min max 00:17:09.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7839.59 30.62 8166.04 982.29 10681.26 00:17:09.554 ======================================================== 00:17:09.554 Total : 7839.59 30.62 8166.04 982.29 10681.26 00:17:09.554 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.S25tEOOl2q 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.S25tEOOl2q' 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=593624 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 593624 /var/tmp/bdevperf.sock 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 593624 ']' 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 [2024-07-12 11:21:33.580245] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:09.554 [2024-07-12 11:21:33.580321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593624 ] 00:17:09.554 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.554 [2024-07-12 11:21:33.637318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.554 [2024-07-12 11:21:33.743244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:09.554 11:21:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.S25tEOOl2q 00:17:09.554 [2024-07-12 11:21:34.125345] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.554 [2024-07-12 11:21:34.125479] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:09.554 TLSTESTn1 00:17:09.554 11:21:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:09.554 Running I/O for 10 seconds... 00:17:19.611 00:17:19.611 Latency(us) 00:17:19.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.611 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.611 Verification LBA range: start 0x0 length 0x2000 00:17:19.611 TLSTESTn1 : 10.02 3492.60 13.64 0.00 0.00 36580.04 9757.58 33981.63 00:17:19.611 =================================================================================================================== 00:17:19.611 Total : 3492.60 13.64 0.00 0.00 36580.04 9757.58 33981.63 00:17:19.611 0 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 593624 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 593624 ']' 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 593624 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 593624 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 593624' 00:17:19.611 killing process with pid 593624 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 593624 00:17:19.611 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.611 00:17:19.611 Latency(us) 
00:17:19.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.611 =================================================================================================================== 00:17:19.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.611 [2024-07-12 11:21:44.413361] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 593624 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B0ypxocy0a 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B0ypxocy0a 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.B0ypxocy0a 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.B0ypxocy0a' 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=594937 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 594937 /var/tmp/bdevperf.sock 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 594937 ']' 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.611 [2024-07-12 11:21:44.712340] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:17:19.611 [2024-07-12 11:21:44.712418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594937 ] 00:17:19.611 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.611 [2024-07-12 11:21:44.770254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.611 [2024-07-12 11:21:44.873758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:19.611 11:21:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B0ypxocy0a 00:17:19.611 [2024-07-12 11:21:45.200430] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.611 [2024-07-12 11:21:45.200569] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:19.611 [2024-07-12 11:21:45.206091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.611 [2024-07-12 11:21:45.206548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba9f90 (107): Transport endpoint is not connected 00:17:19.611 [2024-07-12 11:21:45.207535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba9f90 (9): Bad file descriptor 00:17:19.611 [2024-07-12 
11:21:45.208534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.611 [2024-07-12 11:21:45.208555] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.611 [2024-07-12 11:21:45.208588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.611 request: 00:17:19.611 { 00:17:19.611 "name": "TLSTEST", 00:17:19.611 "trtype": "tcp", 00:17:19.611 "traddr": "10.0.0.2", 00:17:19.611 "adrfam": "ipv4", 00:17:19.611 "trsvcid": "4420", 00:17:19.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.611 "prchk_reftag": false, 00:17:19.611 "prchk_guard": false, 00:17:19.611 "hdgst": false, 00:17:19.611 "ddgst": false, 00:17:19.611 "psk": "/tmp/tmp.B0ypxocy0a", 00:17:19.611 "method": "bdev_nvme_attach_controller", 00:17:19.611 "req_id": 1 00:17:19.611 } 00:17:19.611 Got JSON-RPC error response 00:17:19.611 response: 00:17:19.611 { 00:17:19.611 "code": -5, 00:17:19.611 "message": "Input/output error" 00:17:19.611 } 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 594937 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 594937 ']' 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 594937 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594937 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594937' 
00:17:19.611 killing process with pid 594937 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 594937 00:17:19.611 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.611 00:17:19.611 Latency(us) 00:17:19.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.611 =================================================================================================================== 00:17:19.611 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.611 [2024-07-12 11:21:45.260981] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 594937 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.S25tEOOl2q 00:17:19.611 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.S25tEOOl2q 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:19.612 11:21:45 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.S25tEOOl2q 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.S25tEOOl2q' 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=594951 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 594951 /var/tmp/bdevperf.sock 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 594951 ']' 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.612 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.612 [2024-07-12 11:21:45.571520] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:19.612 [2024-07-12 11:21:45.571597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594951 ] 00:17:19.612 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.612 [2024-07-12 11:21:45.633036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.870 [2024-07-12 11:21:45.748983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.870 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.870 11:21:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:19.870 11:21:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.S25tEOOl2q 00:17:20.128 [2024-07-12 11:21:46.118666] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.128 [2024-07-12 11:21:46.118791] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:20.128 [2024-07-12 11:21:46.128171] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:20.128 [2024-07-12 11:21:46.128216] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:20.128 [2024-07-12 11:21:46.128270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.128 [2024-07-12 11:21:46.128642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3cf90 (107): Transport endpoint is not connected 00:17:20.128 [2024-07-12 11:21:46.129632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd3cf90 (9): Bad file descriptor 00:17:20.128 [2024-07-12 11:21:46.130631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.128 [2024-07-12 11:21:46.130651] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.128 [2024-07-12 11:21:46.130684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:20.128 request: 00:17:20.128 { 00:17:20.128 "name": "TLSTEST", 00:17:20.128 "trtype": "tcp", 00:17:20.128 "traddr": "10.0.0.2", 00:17:20.128 "adrfam": "ipv4", 00:17:20.128 "trsvcid": "4420", 00:17:20.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.128 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:20.128 "prchk_reftag": false, 00:17:20.128 "prchk_guard": false, 00:17:20.128 "hdgst": false, 00:17:20.128 "ddgst": false, 00:17:20.128 "psk": "/tmp/tmp.S25tEOOl2q", 00:17:20.128 "method": "bdev_nvme_attach_controller", 00:17:20.128 "req_id": 1 00:17:20.128 } 00:17:20.128 Got JSON-RPC error response 00:17:20.128 response: 00:17:20.128 { 00:17:20.128 "code": -5, 00:17:20.128 "message": "Input/output error" 00:17:20.128 } 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 594951 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 594951 ']' 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 594951 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594951 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594951' 00:17:20.128 killing process with pid 594951 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 594951 00:17:20.128 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.128 00:17:20.128 Latency(us) 00:17:20.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.128 
=================================================================================================================== 00:17:20.128 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.128 [2024-07-12 11:21:46.183561] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:20.128 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 594951 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.S25tEOOl2q 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.S25tEOOl2q 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.S25tEOOl2q 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.S25tEOOl2q' 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=595093 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 595093 /var/tmp/bdevperf.sock 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 595093 ']' 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.386 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.386 [2024-07-12 11:21:46.493569] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:17:20.386 [2024-07-12 11:21:46.493654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595093 ] 00:17:20.644 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.644 [2024-07-12 11:21:46.551784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.644 [2024-07-12 11:21:46.656931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.644 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.644 11:21:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:20.644 11:21:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.S25tEOOl2q 00:17:21.209 [2024-07-12 11:21:47.042491] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.209 [2024-07-12 11:21:47.042621] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:21.209 [2024-07-12 11:21:47.050582] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:21.209 [2024-07-12 11:21:47.050612] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:21.209 [2024-07-12 11:21:47.050651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:21.209 [2024-07-12 11:21:47.051500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8f90 (107): Transport endpoint is not connected 00:17:21.209 [2024-07-12 11:21:47.052491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8f90 (9): Bad file descriptor 00:17:21.209 [2024-07-12 11:21:47.053489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:21.209 [2024-07-12 11:21:47.053508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.209 [2024-07-12 11:21:47.053540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:21.209 request: 00:17:21.209 { 00:17:21.209 "name": "TLSTEST", 00:17:21.209 "trtype": "tcp", 00:17:21.209 "traddr": "10.0.0.2", 00:17:21.210 "adrfam": "ipv4", 00:17:21.210 "trsvcid": "4420", 00:17:21.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:21.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.210 "prchk_reftag": false, 00:17:21.210 "prchk_guard": false, 00:17:21.210 "hdgst": false, 00:17:21.210 "ddgst": false, 00:17:21.210 "psk": "/tmp/tmp.S25tEOOl2q", 00:17:21.210 "method": "bdev_nvme_attach_controller", 00:17:21.210 "req_id": 1 00:17:21.210 } 00:17:21.210 Got JSON-RPC error response 00:17:21.210 response: 00:17:21.210 { 00:17:21.210 "code": -5, 00:17:21.210 "message": "Input/output error" 00:17:21.210 } 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 595093 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 595093 ']' 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 595093 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 595093 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 595093' 00:17:21.210 killing process with pid 595093 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 595093 00:17:21.210 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.210 00:17:21.210 Latency(us) 00:17:21.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.210 =================================================================================================================== 00:17:21.210 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.210 [2024-07-12 11:21:47.104009] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:21.210 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 595093 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=595233 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 595233 /var/tmp/bdevperf.sock 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 595233 ']' 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.468 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.468 [2024-07-12 11:21:47.397108] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:21.468 [2024-07-12 11:21:47.397199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595233 ] 00:17:21.468 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.468 [2024-07-12 11:21:47.455124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.468 [2024-07-12 11:21:47.563266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.726 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.726 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:21.726 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:22.033 [2024-07-12 11:21:47.882884] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.033 [2024-07-12 11:21:47.884295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d6770 (9): Bad file descriptor 00:17:22.033 [2024-07-12 11:21:47.885292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:22.033 [2024-07-12 11:21:47.885312] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.033 [2024-07-12 11:21:47.885344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:22.033 request: 00:17:22.033 { 00:17:22.033 "name": "TLSTEST", 00:17:22.033 "trtype": "tcp", 00:17:22.033 "traddr": "10.0.0.2", 00:17:22.033 "adrfam": "ipv4", 00:17:22.033 "trsvcid": "4420", 00:17:22.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.033 "prchk_reftag": false, 00:17:22.033 "prchk_guard": false, 00:17:22.033 "hdgst": false, 00:17:22.033 "ddgst": false, 00:17:22.033 "method": "bdev_nvme_attach_controller", 00:17:22.034 "req_id": 1 00:17:22.034 } 00:17:22.034 Got JSON-RPC error response 00:17:22.034 response: 00:17:22.034 { 00:17:22.034 "code": -5, 00:17:22.034 "message": "Input/output error" 00:17:22.034 } 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 595233 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 595233 ']' 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 595233 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 595233 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 595233' 00:17:22.034 killing process with pid 595233 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 595233 00:17:22.034 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.034 00:17:22.034 Latency(us) 00:17:22.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.034 =================================================================================================================== 00:17:22.034 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.034 11:21:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 595233 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 591718 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 591718 ']' 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 591718 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 591718 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 591718' 00:17:22.292 killing process with pid 591718 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 591718 
00:17:22.292 [2024-07-12 11:21:48.221655] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:22.292 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 591718 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.qljnWVlTvm 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.qljnWVlTvm 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=595379 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 595379 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 595379 ']' 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.551 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.551 [2024-07-12 11:21:48.600308] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:22.551 [2024-07-12 11:21:48.600398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.551 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.551 [2024-07-12 11:21:48.663476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.809 [2024-07-12 11:21:48.764616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.809 [2024-07-12 11:21:48.764688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:22.809 [2024-07-12 11:21:48.764702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.809 [2024-07-12 11:21:48.764712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.809 [2024-07-12 11:21:48.764721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.809 [2024-07-12 11:21:48.764759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.qljnWVlTvm 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qljnWVlTvm 00:17:22.809 11:21:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.067 [2024-07-12 11:21:49.140749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.067 11:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.324 11:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:17:23.582 [2024-07-12 11:21:49.682225] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.582 [2024-07-12 11:21:49.682465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.582 11:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:23.840 malloc0 00:17:23.840 11:21:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:24.405 [2024-07-12 11:21:50.514724] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qljnWVlTvm 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qljnWVlTvm' 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=595662 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.405 11:21:50 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 595662 /var/tmp/bdevperf.sock 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 595662 ']' 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.405 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.663 [2024-07-12 11:21:50.579160] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:17:24.663 [2024-07-12 11:21:50.579253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595662 ] 00:17:24.663 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.663 [2024-07-12 11:21:50.654640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.663 [2024-07-12 11:21:50.791142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.921 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.921 11:21:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:24.921 11:21:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:25.178 [2024-07-12 11:21:51.138916] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:25.178 [2024-07-12 11:21:51.139045] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:25.178 TLSTESTn1 00:17:25.178 11:21:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:25.436 Running I/O for 10 seconds... 
00:17:35.402
00:17:35.402 Latency(us)
00:17:35.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.402 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:35.402 Verification LBA range: start 0x0 length 0x2000
00:17:35.402 TLSTESTn1 : 10.02 3429.15 13.40 0.00 0.00 37261.70 8398.32 29709.65
00:17:35.402 ===================================================================================================================
00:17:35.402 Total : 3429.15 13.40 0.00 0.00 37261.70 8398.32 29709.65
00:17:35.402 0
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 595662
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 595662 ']'
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 595662
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 595662
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 595662'
00:17:35.402 killing process with pid 595662
00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 595662
00:17:35.402 Received shutdown signal, test time was about 10.000000 seconds
00:17:35.402
00:17:35.402 Latency(us)
00:17:35.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.402
=================================================================================================================== 00:17:35.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.402 [2024-07-12 11:22:01.428028] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:35.402 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 595662 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.qljnWVlTvm 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qljnWVlTvm 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qljnWVlTvm 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qljnWVlTvm 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qljnWVlTvm' 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=596977 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 596977 /var/tmp/bdevperf.sock 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 596977 ']' 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.660 11:22:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.660 [2024-07-12 11:22:01.742883] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:17:35.660 [2024-07-12 11:22:01.742977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596977 ] 00:17:35.660 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.919 [2024-07-12 11:22:01.803310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.919 [2024-07-12 11:22:01.912318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.919 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.919 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:35.919 11:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:36.176 [2024-07-12 11:22:02.267571] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.176 [2024-07-12 11:22:02.267652] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:36.176 [2024-07-12 11:22:02.267666] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.qljnWVlTvm 00:17:36.176 request: 00:17:36.176 { 00:17:36.176 "name": "TLSTEST", 00:17:36.176 "trtype": "tcp", 00:17:36.176 "traddr": "10.0.0.2", 00:17:36.176 "adrfam": "ipv4", 00:17:36.176 "trsvcid": "4420", 00:17:36.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.176 "prchk_reftag": false, 00:17:36.176 "prchk_guard": false, 00:17:36.176 "hdgst": false, 00:17:36.176 "ddgst": false, 00:17:36.176 "psk": "/tmp/tmp.qljnWVlTvm", 00:17:36.176 "method": "bdev_nvme_attach_controller", 
00:17:36.176 "req_id": 1 00:17:36.176 } 00:17:36.176 Got JSON-RPC error response 00:17:36.176 response: 00:17:36.176 { 00:17:36.176 "code": -1, 00:17:36.176 "message": "Operation not permitted" 00:17:36.176 } 00:17:36.176 11:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 596977 00:17:36.176 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 596977 ']' 00:17:36.176 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 596977 00:17:36.176 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:36.176 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:36.176 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 596977 00:17:36.433 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:36.433 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:36.433 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 596977' 00:17:36.433 killing process with pid 596977 00:17:36.433 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 596977 00:17:36.433 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.433 00:17:36.433 Latency(us) 00:17:36.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.433 =================================================================================================================== 00:17:36.433 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:36.433 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 596977 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:36.692 11:22:02 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 595379 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 595379 ']' 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 595379 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 595379 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 595379' 00:17:36.692 killing process with pid 595379 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 595379 00:17:36.692 [2024-07-12 11:22:02.609151] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:36.692 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 595379 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=597124 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 597124 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 597124 ']' 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.950 11:22:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.950 [2024-07-12 11:22:02.941524] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:36.950 [2024-07-12 11:22:02.941633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.950 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.950 [2024-07-12 11:22:03.006290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.207 [2024-07-12 11:22:03.109915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.207 [2024-07-12 11:22:03.109981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:37.207 [2024-07-12 11:22:03.110005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.207 [2024-07-12 11:22:03.110016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.207 [2024-07-12 11:22:03.110025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.207 [2024-07-12 11:22:03.110049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.qljnWVlTvm 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qljnWVlTvm 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.qljnWVlTvm 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.qljnWVlTvm 00:17:37.207 11:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:37.464 [2024-07-12 11:22:03.459226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.464 11:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:37.722 11:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:37.979 [2024-07-12 11:22:03.940420] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:37.979 [2024-07-12 11:22:03.940635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.979 11:22:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:38.236 malloc0 00:17:38.236 11:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:38.493 11:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:38.750 [2024-07-12 11:22:04.664492] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:38.750 [2024-07-12 11:22:04.664533] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:38.750 [2024-07-12 11:22:04.664574] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:38.750 
request: 00:17:38.750 { 00:17:38.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.750 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.750 "psk": "/tmp/tmp.qljnWVlTvm", 00:17:38.750 "method": "nvmf_subsystem_add_host", 00:17:38.750 "req_id": 1 00:17:38.750 } 00:17:38.750 Got JSON-RPC error response 00:17:38.750 response: 00:17:38.750 { 00:17:38.750 "code": -32603, 00:17:38.750 "message": "Internal error" 00:17:38.750 } 00:17:38.750 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:38.750 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:38.750 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:38.750 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 597124 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 597124 ']' 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 597124 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597124 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597124' 00:17:38.751 killing process with pid 597124 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 597124 00:17:38.751 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 597124 00:17:39.008 11:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.qljnWVlTvm 
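The failures above (bdev_nvme "Operation not permitted" and nvmf_subsystem_add_host "Internal error", both traced to "Incorrect permissions for PSK file") are triggered by the test deliberately running `chmod 0666` on the PSK file; the log then restores `chmod 0600` before retrying. A minimal standalone shell sketch of that permission rule — this is illustrative only, not SPDK code, and the temp-file path is hypothetical:

```shell
#!/bin/sh
# Sketch: SPDK's PSK loader rejects a key file whose mode is wider
# than owner-only (0600), the condition the test exercises above.
psk=$(mktemp /tmp/psk.XXXXXX)

chmod 0666 "$psk"            # world-readable, as in the failing step
mode=$(stat -c %a "$psk")    # GNU stat; prints octal mode, e.g. 666
if [ "$mode" != "600" ]; then
    echo "Incorrect permissions for PSK file: $mode"
fi

chmod 0600 "$psk"            # owner-only, as in the recovery step
mode=$(stat -c %a "$psk")
if [ "$mode" = "600" ]; then
    echo "PSK permissions OK: $mode"
fi

rm -f "$psk"
```

The same check is what separates the failing `run_bdevperf` attempt from the passing one later in the log: only the file mode changes, not the key material.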
00:17:39.008 11:22:04 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:39.008 11:22:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.008 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.008 11:22:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=597352 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 597352 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 597352 ']' 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.008 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.008 [2024-07-12 11:22:05.052541] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:17:39.008 [2024-07-12 11:22:05.052633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.008 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.008 [2024-07-12 11:22:05.116011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.266 [2024-07-12 11:22:05.222118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.266 [2024-07-12 11:22:05.222185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.266 [2024-07-12 11:22:05.222207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.266 [2024-07-12 11:22:05.222217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.266 [2024-07-12 11:22:05.222227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.266 [2024-07-12 11:22:05.222252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.qljnWVlTvm 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qljnWVlTvm 00:17:39.266 11:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.524 [2024-07-12 11:22:05.579316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.524 11:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.781 11:22:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.039 [2024-07-12 11:22:06.112832] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.039 [2024-07-12 11:22:06.113081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.039 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:17:40.297 malloc0 00:17:40.297 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.554 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:40.811 [2024-07-12 11:22:06.864972] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=597591 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 597591 /var/tmp/bdevperf.sock 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 597591 ']' 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.811 11:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.811 [2024-07-12 11:22:06.924293] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:40.811 [2024-07-12 11:22:06.924380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597591 ] 00:17:41.069 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.069 [2024-07-12 11:22:06.982911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.069 [2024-07-12 11:22:07.087472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.069 11:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.069 11:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:41.069 11:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:41.327 [2024-07-12 11:22:07.429078] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.327 [2024-07-12 11:22:07.429203] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.585 TLSTESTn1 00:17:41.585 11:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:41.844 11:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:41.844 "subsystems": [ 00:17:41.844 { 00:17:41.844 
"subsystem": "keyring", 00:17:41.844 "config": [] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "iobuf", 00:17:41.844 "config": [ 00:17:41.844 { 00:17:41.844 "method": "iobuf_set_options", 00:17:41.844 "params": { 00:17:41.844 "small_pool_count": 8192, 00:17:41.844 "large_pool_count": 1024, 00:17:41.844 "small_bufsize": 8192, 00:17:41.844 "large_bufsize": 135168 00:17:41.844 } 00:17:41.844 } 00:17:41.844 ] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "sock", 00:17:41.844 "config": [ 00:17:41.844 { 00:17:41.844 "method": "sock_set_default_impl", 00:17:41.844 "params": { 00:17:41.844 "impl_name": "posix" 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "sock_impl_set_options", 00:17:41.844 "params": { 00:17:41.844 "impl_name": "ssl", 00:17:41.844 "recv_buf_size": 4096, 00:17:41.844 "send_buf_size": 4096, 00:17:41.844 "enable_recv_pipe": true, 00:17:41.844 "enable_quickack": false, 00:17:41.844 "enable_placement_id": 0, 00:17:41.844 "enable_zerocopy_send_server": true, 00:17:41.844 "enable_zerocopy_send_client": false, 00:17:41.844 "zerocopy_threshold": 0, 00:17:41.844 "tls_version": 0, 00:17:41.844 "enable_ktls": false 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "sock_impl_set_options", 00:17:41.844 "params": { 00:17:41.844 "impl_name": "posix", 00:17:41.844 "recv_buf_size": 2097152, 00:17:41.844 "send_buf_size": 2097152, 00:17:41.844 "enable_recv_pipe": true, 00:17:41.844 "enable_quickack": false, 00:17:41.844 "enable_placement_id": 0, 00:17:41.844 "enable_zerocopy_send_server": true, 00:17:41.844 "enable_zerocopy_send_client": false, 00:17:41.844 "zerocopy_threshold": 0, 00:17:41.844 "tls_version": 0, 00:17:41.844 "enable_ktls": false 00:17:41.844 } 00:17:41.844 } 00:17:41.844 ] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "vmd", 00:17:41.844 "config": [] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "accel", 00:17:41.844 "config": [ 00:17:41.844 { 00:17:41.844 "method": 
"accel_set_options", 00:17:41.844 "params": { 00:17:41.844 "small_cache_size": 128, 00:17:41.844 "large_cache_size": 16, 00:17:41.844 "task_count": 2048, 00:17:41.844 "sequence_count": 2048, 00:17:41.844 "buf_count": 2048 00:17:41.844 } 00:17:41.844 } 00:17:41.844 ] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "bdev", 00:17:41.844 "config": [ 00:17:41.844 { 00:17:41.844 "method": "bdev_set_options", 00:17:41.844 "params": { 00:17:41.844 "bdev_io_pool_size": 65535, 00:17:41.844 "bdev_io_cache_size": 256, 00:17:41.844 "bdev_auto_examine": true, 00:17:41.844 "iobuf_small_cache_size": 128, 00:17:41.844 "iobuf_large_cache_size": 16 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "bdev_raid_set_options", 00:17:41.844 "params": { 00:17:41.844 "process_window_size_kb": 1024 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "bdev_iscsi_set_options", 00:17:41.844 "params": { 00:17:41.844 "timeout_sec": 30 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "bdev_nvme_set_options", 00:17:41.844 "params": { 00:17:41.844 "action_on_timeout": "none", 00:17:41.844 "timeout_us": 0, 00:17:41.844 "timeout_admin_us": 0, 00:17:41.844 "keep_alive_timeout_ms": 10000, 00:17:41.844 "arbitration_burst": 0, 00:17:41.844 "low_priority_weight": 0, 00:17:41.844 "medium_priority_weight": 0, 00:17:41.844 "high_priority_weight": 0, 00:17:41.844 "nvme_adminq_poll_period_us": 10000, 00:17:41.844 "nvme_ioq_poll_period_us": 0, 00:17:41.844 "io_queue_requests": 0, 00:17:41.844 "delay_cmd_submit": true, 00:17:41.844 "transport_retry_count": 4, 00:17:41.844 "bdev_retry_count": 3, 00:17:41.844 "transport_ack_timeout": 0, 00:17:41.844 "ctrlr_loss_timeout_sec": 0, 00:17:41.844 "reconnect_delay_sec": 0, 00:17:41.844 "fast_io_fail_timeout_sec": 0, 00:17:41.844 "disable_auto_failback": false, 00:17:41.844 "generate_uuids": false, 00:17:41.844 "transport_tos": 0, 00:17:41.844 "nvme_error_stat": false, 00:17:41.844 "rdma_srq_size": 0, 
00:17:41.844 "io_path_stat": false, 00:17:41.844 "allow_accel_sequence": false, 00:17:41.844 "rdma_max_cq_size": 0, 00:17:41.844 "rdma_cm_event_timeout_ms": 0, 00:17:41.844 "dhchap_digests": [ 00:17:41.844 "sha256", 00:17:41.844 "sha384", 00:17:41.844 "sha512" 00:17:41.844 ], 00:17:41.844 "dhchap_dhgroups": [ 00:17:41.844 "null", 00:17:41.844 "ffdhe2048", 00:17:41.844 "ffdhe3072", 00:17:41.844 "ffdhe4096", 00:17:41.844 "ffdhe6144", 00:17:41.844 "ffdhe8192" 00:17:41.844 ] 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "bdev_nvme_set_hotplug", 00:17:41.844 "params": { 00:17:41.844 "period_us": 100000, 00:17:41.844 "enable": false 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "bdev_malloc_create", 00:17:41.844 "params": { 00:17:41.844 "name": "malloc0", 00:17:41.844 "num_blocks": 8192, 00:17:41.844 "block_size": 4096, 00:17:41.844 "physical_block_size": 4096, 00:17:41.844 "uuid": "6589a6b0-1449-4c92-8afd-632222169eb6", 00:17:41.844 "optimal_io_boundary": 0 00:17:41.844 } 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "method": "bdev_wait_for_examine" 00:17:41.844 } 00:17:41.844 ] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "nbd", 00:17:41.844 "config": [] 00:17:41.844 }, 00:17:41.844 { 00:17:41.844 "subsystem": "scheduler", 00:17:41.844 "config": [ 00:17:41.844 { 00:17:41.844 "method": "framework_set_scheduler", 00:17:41.844 "params": { 00:17:41.844 "name": "static" 00:17:41.844 } 00:17:41.844 } 00:17:41.844 ] 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "subsystem": "nvmf", 00:17:41.845 "config": [ 00:17:41.845 { 00:17:41.845 "method": "nvmf_set_config", 00:17:41.845 "params": { 00:17:41.845 "discovery_filter": "match_any", 00:17:41.845 "admin_cmd_passthru": { 00:17:41.845 "identify_ctrlr": false 00:17:41.845 } 00:17:41.845 } 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "method": "nvmf_set_max_subsystems", 00:17:41.845 "params": { 00:17:41.845 "max_subsystems": 1024 00:17:41.845 } 00:17:41.845 }, 00:17:41.845 { 
00:17:41.845 "method": "nvmf_set_crdt", 00:17:41.845 "params": { 00:17:41.845 "crdt1": 0, 00:17:41.845 "crdt2": 0, 00:17:41.845 "crdt3": 0 00:17:41.845 } 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "method": "nvmf_create_transport", 00:17:41.845 "params": { 00:17:41.845 "trtype": "TCP", 00:17:41.845 "max_queue_depth": 128, 00:17:41.845 "max_io_qpairs_per_ctrlr": 127, 00:17:41.845 "in_capsule_data_size": 4096, 00:17:41.845 "max_io_size": 131072, 00:17:41.845 "io_unit_size": 131072, 00:17:41.845 "max_aq_depth": 128, 00:17:41.845 "num_shared_buffers": 511, 00:17:41.845 "buf_cache_size": 4294967295, 00:17:41.845 "dif_insert_or_strip": false, 00:17:41.845 "zcopy": false, 00:17:41.845 "c2h_success": false, 00:17:41.845 "sock_priority": 0, 00:17:41.845 "abort_timeout_sec": 1, 00:17:41.845 "ack_timeout": 0, 00:17:41.845 "data_wr_pool_size": 0 00:17:41.845 } 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "method": "nvmf_create_subsystem", 00:17:41.845 "params": { 00:17:41.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.845 "allow_any_host": false, 00:17:41.845 "serial_number": "SPDK00000000000001", 00:17:41.845 "model_number": "SPDK bdev Controller", 00:17:41.845 "max_namespaces": 10, 00:17:41.845 "min_cntlid": 1, 00:17:41.845 "max_cntlid": 65519, 00:17:41.845 "ana_reporting": false 00:17:41.845 } 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "method": "nvmf_subsystem_add_host", 00:17:41.845 "params": { 00:17:41.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.845 "host": "nqn.2016-06.io.spdk:host1", 00:17:41.845 "psk": "/tmp/tmp.qljnWVlTvm" 00:17:41.845 } 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "method": "nvmf_subsystem_add_ns", 00:17:41.845 "params": { 00:17:41.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.845 "namespace": { 00:17:41.845 "nsid": 1, 00:17:41.845 "bdev_name": "malloc0", 00:17:41.845 "nguid": "6589A6B014494C928AFD632222169EB6", 00:17:41.845 "uuid": "6589a6b0-1449-4c92-8afd-632222169eb6", 00:17:41.845 "no_auto_visible": false 00:17:41.845 } 00:17:41.845 
} 00:17:41.845 }, 00:17:41.845 { 00:17:41.845 "method": "nvmf_subsystem_add_listener", 00:17:41.845 "params": { 00:17:41.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.845 "listen_address": { 00:17:41.845 "trtype": "TCP", 00:17:41.845 "adrfam": "IPv4", 00:17:41.845 "traddr": "10.0.0.2", 00:17:41.845 "trsvcid": "4420" 00:17:41.845 }, 00:17:41.845 "secure_channel": true 00:17:41.845 } 00:17:41.845 } 00:17:41.845 ] 00:17:41.845 } 00:17:41.845 ] 00:17:41.845 }' 00:17:41.845 11:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:42.103 11:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:42.103 "subsystems": [ 00:17:42.103 { 00:17:42.103 "subsystem": "keyring", 00:17:42.103 "config": [] 00:17:42.103 }, 00:17:42.103 { 00:17:42.103 "subsystem": "iobuf", 00:17:42.103 "config": [ 00:17:42.103 { 00:17:42.103 "method": "iobuf_set_options", 00:17:42.103 "params": { 00:17:42.103 "small_pool_count": 8192, 00:17:42.103 "large_pool_count": 1024, 00:17:42.103 "small_bufsize": 8192, 00:17:42.103 "large_bufsize": 135168 00:17:42.103 } 00:17:42.103 } 00:17:42.103 ] 00:17:42.103 }, 00:17:42.103 { 00:17:42.103 "subsystem": "sock", 00:17:42.103 "config": [ 00:17:42.103 { 00:17:42.103 "method": "sock_set_default_impl", 00:17:42.103 "params": { 00:17:42.103 "impl_name": "posix" 00:17:42.103 } 00:17:42.103 }, 00:17:42.103 { 00:17:42.103 "method": "sock_impl_set_options", 00:17:42.103 "params": { 00:17:42.103 "impl_name": "ssl", 00:17:42.103 "recv_buf_size": 4096, 00:17:42.103 "send_buf_size": 4096, 00:17:42.103 "enable_recv_pipe": true, 00:17:42.103 "enable_quickack": false, 00:17:42.103 "enable_placement_id": 0, 00:17:42.103 "enable_zerocopy_send_server": true, 00:17:42.103 "enable_zerocopy_send_client": false, 00:17:42.103 "zerocopy_threshold": 0, 00:17:42.103 "tls_version": 0, 00:17:42.103 "enable_ktls": false 00:17:42.103 } 00:17:42.103 }, 00:17:42.103 { 
00:17:42.103 "method": "sock_impl_set_options", 00:17:42.103 "params": { 00:17:42.103 "impl_name": "posix", 00:17:42.103 "recv_buf_size": 2097152, 00:17:42.103 "send_buf_size": 2097152, 00:17:42.103 "enable_recv_pipe": true, 00:17:42.103 "enable_quickack": false, 00:17:42.103 "enable_placement_id": 0, 00:17:42.103 "enable_zerocopy_send_server": true, 00:17:42.103 "enable_zerocopy_send_client": false, 00:17:42.103 "zerocopy_threshold": 0, 00:17:42.103 "tls_version": 0, 00:17:42.103 "enable_ktls": false 00:17:42.103 } 00:17:42.103 } 00:17:42.103 ] 00:17:42.103 }, 00:17:42.103 { 00:17:42.103 "subsystem": "vmd", 00:17:42.103 "config": [] 00:17:42.103 }, 00:17:42.103 { 00:17:42.103 "subsystem": "accel", 00:17:42.103 "config": [ 00:17:42.103 { 00:17:42.103 "method": "accel_set_options", 00:17:42.103 "params": { 00:17:42.103 "small_cache_size": 128, 00:17:42.103 "large_cache_size": 16, 00:17:42.103 "task_count": 2048, 00:17:42.103 "sequence_count": 2048, 00:17:42.103 "buf_count": 2048 00:17:42.103 } 00:17:42.103 } 00:17:42.103 ] 00:17:42.103 }, 00:17:42.103 { 00:17:42.103 "subsystem": "bdev", 00:17:42.103 "config": [ 00:17:42.103 { 00:17:42.103 "method": "bdev_set_options", 00:17:42.103 "params": { 00:17:42.103 "bdev_io_pool_size": 65535, 00:17:42.103 "bdev_io_cache_size": 256, 00:17:42.103 "bdev_auto_examine": true, 00:17:42.104 "iobuf_small_cache_size": 128, 00:17:42.104 "iobuf_large_cache_size": 16 00:17:42.104 } 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "method": "bdev_raid_set_options", 00:17:42.104 "params": { 00:17:42.104 "process_window_size_kb": 1024 00:17:42.104 } 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "method": "bdev_iscsi_set_options", 00:17:42.104 "params": { 00:17:42.104 "timeout_sec": 30 00:17:42.104 } 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "method": "bdev_nvme_set_options", 00:17:42.104 "params": { 00:17:42.104 "action_on_timeout": "none", 00:17:42.104 "timeout_us": 0, 00:17:42.104 "timeout_admin_us": 0, 00:17:42.104 "keep_alive_timeout_ms": 
10000, 00:17:42.104 "arbitration_burst": 0, 00:17:42.104 "low_priority_weight": 0, 00:17:42.104 "medium_priority_weight": 0, 00:17:42.104 "high_priority_weight": 0, 00:17:42.104 "nvme_adminq_poll_period_us": 10000, 00:17:42.104 "nvme_ioq_poll_period_us": 0, 00:17:42.104 "io_queue_requests": 512, 00:17:42.104 "delay_cmd_submit": true, 00:17:42.104 "transport_retry_count": 4, 00:17:42.104 "bdev_retry_count": 3, 00:17:42.104 "transport_ack_timeout": 0, 00:17:42.104 "ctrlr_loss_timeout_sec": 0, 00:17:42.104 "reconnect_delay_sec": 0, 00:17:42.104 "fast_io_fail_timeout_sec": 0, 00:17:42.104 "disable_auto_failback": false, 00:17:42.104 "generate_uuids": false, 00:17:42.104 "transport_tos": 0, 00:17:42.104 "nvme_error_stat": false, 00:17:42.104 "rdma_srq_size": 0, 00:17:42.104 "io_path_stat": false, 00:17:42.104 "allow_accel_sequence": false, 00:17:42.104 "rdma_max_cq_size": 0, 00:17:42.104 "rdma_cm_event_timeout_ms": 0, 00:17:42.104 "dhchap_digests": [ 00:17:42.104 "sha256", 00:17:42.104 "sha384", 00:17:42.104 "sha512" 00:17:42.104 ], 00:17:42.104 "dhchap_dhgroups": [ 00:17:42.104 "null", 00:17:42.104 "ffdhe2048", 00:17:42.104 "ffdhe3072", 00:17:42.104 "ffdhe4096", 00:17:42.104 "ffdhe6144", 00:17:42.104 "ffdhe8192" 00:17:42.104 ] 00:17:42.104 } 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "method": "bdev_nvme_attach_controller", 00:17:42.104 "params": { 00:17:42.104 "name": "TLSTEST", 00:17:42.104 "trtype": "TCP", 00:17:42.104 "adrfam": "IPv4", 00:17:42.104 "traddr": "10.0.0.2", 00:17:42.104 "trsvcid": "4420", 00:17:42.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.104 "prchk_reftag": false, 00:17:42.104 "prchk_guard": false, 00:17:42.104 "ctrlr_loss_timeout_sec": 0, 00:17:42.104 "reconnect_delay_sec": 0, 00:17:42.104 "fast_io_fail_timeout_sec": 0, 00:17:42.104 "psk": "/tmp/tmp.qljnWVlTvm", 00:17:42.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.104 "hdgst": false, 00:17:42.104 "ddgst": false 00:17:42.104 } 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "method": 
"bdev_nvme_set_hotplug", 00:17:42.104 "params": { 00:17:42.104 "period_us": 100000, 00:17:42.104 "enable": false 00:17:42.104 } 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "method": "bdev_wait_for_examine" 00:17:42.104 } 00:17:42.104 ] 00:17:42.104 }, 00:17:42.104 { 00:17:42.104 "subsystem": "nbd", 00:17:42.104 "config": [] 00:17:42.104 } 00:17:42.104 ] 00:17:42.104 }' 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 597591 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 597591 ']' 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 597591 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597591 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597591' 00:17:42.104 killing process with pid 597591 00:17:42.104 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 597591 00:17:42.104 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.104 00:17:42.104 Latency(us) 00:17:42.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.104 =================================================================================================================== 00:17:42.104 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.104 [2024-07-12 11:22:08.169055] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:42.104 
11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 597591 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 597352 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 597352 ']' 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 597352 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597352 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597352' 00:17:42.362 killing process with pid 597352 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 597352 00:17:42.362 [2024-07-12 11:22:08.437635] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:42.362 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 597352 00:17:42.622 11:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:42.622 11:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.622 11:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:42.622 "subsystems": [ 00:17:42.622 { 00:17:42.622 "subsystem": "keyring", 00:17:42.622 "config": [] 00:17:42.622 }, 00:17:42.622 { 00:17:42.622 "subsystem": "iobuf", 00:17:42.622 "config": [ 00:17:42.622 { 00:17:42.622 "method": "iobuf_set_options", 00:17:42.622 "params": { 00:17:42.622 "small_pool_count": 8192, 00:17:42.622 "large_pool_count": 1024, 
00:17:42.622 "small_bufsize": 8192, 00:17:42.622 "large_bufsize": 135168 00:17:42.622 } 00:17:42.622 } 00:17:42.622 ] 00:17:42.622 }, 00:17:42.622 { 00:17:42.622 "subsystem": "sock", 00:17:42.622 "config": [ 00:17:42.622 { 00:17:42.622 "method": "sock_set_default_impl", 00:17:42.622 "params": { 00:17:42.622 "impl_name": "posix" 00:17:42.622 } 00:17:42.622 }, 00:17:42.622 { 00:17:42.622 "method": "sock_impl_set_options", 00:17:42.622 "params": { 00:17:42.622 "impl_name": "ssl", 00:17:42.622 "recv_buf_size": 4096, 00:17:42.622 "send_buf_size": 4096, 00:17:42.622 "enable_recv_pipe": true, 00:17:42.622 "enable_quickack": false, 00:17:42.622 "enable_placement_id": 0, 00:17:42.622 "enable_zerocopy_send_server": true, 00:17:42.622 "enable_zerocopy_send_client": false, 00:17:42.622 "zerocopy_threshold": 0, 00:17:42.622 "tls_version": 0, 00:17:42.622 "enable_ktls": false 00:17:42.622 } 00:17:42.622 }, 00:17:42.622 { 00:17:42.622 "method": "sock_impl_set_options", 00:17:42.622 "params": { 00:17:42.622 "impl_name": "posix", 00:17:42.622 "recv_buf_size": 2097152, 00:17:42.622 "send_buf_size": 2097152, 00:17:42.622 "enable_recv_pipe": true, 00:17:42.622 "enable_quickack": false, 00:17:42.622 "enable_placement_id": 0, 00:17:42.622 "enable_zerocopy_send_server": true, 00:17:42.622 "enable_zerocopy_send_client": false, 00:17:42.622 "zerocopy_threshold": 0, 00:17:42.622 "tls_version": 0, 00:17:42.622 "enable_ktls": false 00:17:42.622 } 00:17:42.622 } 00:17:42.622 ] 00:17:42.622 }, 00:17:42.622 { 00:17:42.623 "subsystem": "vmd", 00:17:42.623 "config": [] 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "subsystem": "accel", 00:17:42.623 "config": [ 00:17:42.623 { 00:17:42.623 "method": "accel_set_options", 00:17:42.623 "params": { 00:17:42.623 "small_cache_size": 128, 00:17:42.623 "large_cache_size": 16, 00:17:42.623 "task_count": 2048, 00:17:42.623 "sequence_count": 2048, 00:17:42.623 "buf_count": 2048 00:17:42.623 } 00:17:42.623 } 00:17:42.623 ] 00:17:42.623 }, 00:17:42.623 { 
00:17:42.623 "subsystem": "bdev", 00:17:42.623 "config": [ 00:17:42.623 { 00:17:42.623 "method": "bdev_set_options", 00:17:42.623 "params": { 00:17:42.623 "bdev_io_pool_size": 65535, 00:17:42.623 "bdev_io_cache_size": 256, 00:17:42.623 "bdev_auto_examine": true, 00:17:42.623 "iobuf_small_cache_size": 128, 00:17:42.623 "iobuf_large_cache_size": 16 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "bdev_raid_set_options", 00:17:42.623 "params": { 00:17:42.623 "process_window_size_kb": 1024 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "bdev_iscsi_set_options", 00:17:42.623 "params": { 00:17:42.623 "timeout_sec": 30 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "bdev_nvme_set_options", 00:17:42.623 "params": { 00:17:42.623 "action_on_timeout": "none", 00:17:42.623 "timeout_us": 0, 00:17:42.623 "timeout_admin_us": 0, 00:17:42.623 "keep_alive_timeout_ms": 10000, 00:17:42.623 "arbitration_burst": 0, 00:17:42.623 "low_priority_weight": 0, 00:17:42.623 "medium_priority_weight": 0, 00:17:42.623 "high_priority_weight": 0, 00:17:42.623 "nvme_adminq_poll_period_us": 10000, 00:17:42.623 "nvme_ioq_poll_period_us": 0, 00:17:42.623 "io_queue_requests": 0, 00:17:42.623 "delay_cmd_submit": true, 00:17:42.623 "transport_retry_count": 4, 00:17:42.623 "bdev_retry_count": 3, 00:17:42.623 "transport_ack_timeout": 0, 00:17:42.623 "ctrlr_loss_timeout_sec": 0, 00:17:42.623 "reconnect_delay_sec": 0, 00:17:42.623 "fast_io_fail_timeout_sec": 0, 00:17:42.623 "disable_auto_failback": false, 00:17:42.623 "generate_uuids": false, 00:17:42.623 "transport_tos": 0, 00:17:42.623 "nvme_error_stat": false, 00:17:42.623 "rdma_srq_size": 0, 00:17:42.623 "io_path_stat": false, 00:17:42.623 "allow_accel_sequence": false, 00:17:42.623 "rdma_max_cq_size": 0, 00:17:42.623 "rdma_cm_event_timeout_ms": 0, 00:17:42.623 "dhchap_digests": [ 00:17:42.623 "sha256", 00:17:42.623 "sha384", 00:17:42.623 "sha512" 00:17:42.623 ], 00:17:42.623 
"dhchap_dhgroups": [ 00:17:42.623 "null", 00:17:42.623 "ffdhe2048", 00:17:42.623 "ffdhe3072", 00:17:42.623 "ffdhe4096", 00:17:42.623 "ffdhe6144", 00:17:42.623 "ffdhe8192" 00:17:42.623 ] 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "bdev_nvme_set_hotplug", 00:17:42.623 "params": { 00:17:42.623 "period_us": 100000, 00:17:42.623 "enable": false 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "bdev_malloc_create", 00:17:42.623 "params": { 00:17:42.623 "name": "malloc0", 00:17:42.623 "num_blocks": 8192, 00:17:42.623 "block_size": 4096, 00:17:42.623 "physical_block_size": 4096, 00:17:42.623 "uuid": "6589a6b0-1449-4c92-8afd-632222169eb6", 00:17:42.623 "optimal_io_boundary": 0 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "bdev_wait_for_examine" 00:17:42.623 } 00:17:42.623 ] 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "subsystem": "nbd", 00:17:42.623 "config": [] 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "subsystem": "scheduler", 00:17:42.623 "config": [ 00:17:42.623 { 00:17:42.623 "method": "framework_set_scheduler", 00:17:42.623 "params": { 00:17:42.623 "name": "static" 00:17:42.623 } 00:17:42.623 } 00:17:42.623 ] 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "subsystem": "nvmf", 00:17:42.623 "config": [ 00:17:42.623 { 00:17:42.623 "method": "nvmf_set_config", 00:17:42.623 "params": { 00:17:42.623 "discovery_filter": "match_any", 00:17:42.623 "admin_cmd_passthru": { 00:17:42.623 "identify_ctrlr": false 00:17:42.623 } 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_set_max_subsystems", 00:17:42.623 "params": { 00:17:42.623 "max_subsystems": 1024 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_set_crdt", 00:17:42.623 "params": { 00:17:42.623 "crdt1": 0, 00:17:42.623 "crdt2": 0, 00:17:42.623 "crdt3": 0 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_create_transport", 00:17:42.623 "params": { 00:17:42.623 "trtype": "TCP", 
00:17:42.623 "max_queue_depth": 128, 00:17:42.623 "max_io_qpairs_per_ctrlr": 127, 00:17:42.623 "in_capsule_data_size": 4096, 00:17:42.623 "max_io_size": 131072, 00:17:42.623 "io_unit_size": 131072, 00:17:42.623 "max_aq_depth": 128, 00:17:42.623 "num_shared_buffers": 511, 00:17:42.623 "buf_cache_size": 4294967295, 00:17:42.623 "dif_insert_or_strip": false, 00:17:42.623 "zcopy": false, 00:17:42.623 "c2h_success": false, 00:17:42.623 "sock_priority": 0, 00:17:42.623 "abort_timeout_sec": 1, 00:17:42.623 "ack_timeout": 0, 00:17:42.623 "data_wr_pool_size": 0 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_create_subsystem", 00:17:42.623 "params": { 00:17:42.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.623 "allow_any_host": false, 00:17:42.623 "serial_number": "SPDK00000000000001", 00:17:42.623 "model_number": "SPDK bdev Controller", 00:17:42.623 "max_namespaces": 10, 00:17:42.623 "min_cntlid": 1, 00:17:42.623 "max_cntlid": 65519, 00:17:42.623 "ana_reporting": false 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_subsystem_add_host", 00:17:42.623 "params": { 00:17:42.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.623 "host": "nqn.2016-06.io.spdk:host1", 00:17:42.623 "psk": "/tmp/tmp.qljnWVlTvm" 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_subsystem_add_ns", 00:17:42.623 "params": { 00:17:42.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.623 "namespace": { 00:17:42.623 "nsid": 1, 00:17:42.623 "bdev_name": "malloc0", 00:17:42.623 "nguid": "6589A6B014494C928AFD632222169EB6", 00:17:42.623 "uuid": "6589a6b0-1449-4c92-8afd-632222169eb6", 00:17:42.623 "no_auto_visible": false 00:17:42.623 } 00:17:42.623 } 00:17:42.623 }, 00:17:42.623 { 00:17:42.623 "method": "nvmf_subsystem_add_listener", 00:17:42.623 "params": { 00:17:42.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.623 "listen_address": { 00:17:42.623 "trtype": "TCP", 00:17:42.623 "adrfam": "IPv4", 00:17:42.623 "traddr": 
"10.0.0.2", 00:17:42.623 "trsvcid": "4420" 00:17:42.623 }, 00:17:42.623 "secure_channel": true 00:17:42.623 } 00:17:42.623 } 00:17:42.623 ] 00:17:42.623 } 00:17:42.623 ] 00:17:42.623 }' 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=597857 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 597857 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 597857 ']' 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.623 11:22:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.882 [2024-07-12 11:22:08.767762] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:17:42.882 [2024-07-12 11:22:08.767850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.882 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.882 [2024-07-12 11:22:08.829900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.882 [2024-07-12 11:22:08.930985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.882 [2024-07-12 11:22:08.931043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.882 [2024-07-12 11:22:08.931064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.882 [2024-07-12 11:22:08.931075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.882 [2024-07-12 11:22:08.931084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:42.882 [2024-07-12 11:22:08.931159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.140 [2024-07-12 11:22:09.158270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.140 [2024-07-12 11:22:09.174240] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:43.140 [2024-07-12 11:22:09.190266] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.140 [2024-07-12 11:22:09.198054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=598010 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 598010 /var/tmp/bdevperf.sock 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 598010 ']' 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.703 11:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:43.703 "subsystems": [ 00:17:43.703 { 00:17:43.703 "subsystem": "keyring", 00:17:43.703 "config": [] 00:17:43.703 }, 00:17:43.703 { 00:17:43.703 "subsystem": "iobuf", 00:17:43.703 "config": [ 00:17:43.703 { 00:17:43.703 "method": "iobuf_set_options", 00:17:43.703 "params": { 00:17:43.703 "small_pool_count": 8192, 00:17:43.703 "large_pool_count": 1024, 00:17:43.703 "small_bufsize": 8192, 00:17:43.703 "large_bufsize": 135168 00:17:43.703 } 00:17:43.703 } 00:17:43.703 ] 00:17:43.703 }, 00:17:43.703 { 00:17:43.703 "subsystem": "sock", 00:17:43.703 "config": [ 00:17:43.703 { 00:17:43.703 "method": "sock_set_default_impl", 00:17:43.703 "params": { 00:17:43.703 "impl_name": "posix" 00:17:43.703 } 00:17:43.703 }, 00:17:43.703 { 00:17:43.703 "method": "sock_impl_set_options", 00:17:43.703 "params": { 00:17:43.703 "impl_name": "ssl", 00:17:43.703 "recv_buf_size": 4096, 00:17:43.703 "send_buf_size": 4096, 00:17:43.703 "enable_recv_pipe": true, 00:17:43.703 "enable_quickack": false, 00:17:43.704 "enable_placement_id": 0, 00:17:43.704 "enable_zerocopy_send_server": true, 00:17:43.704 "enable_zerocopy_send_client": false, 00:17:43.704 "zerocopy_threshold": 0, 00:17:43.704 "tls_version": 0, 00:17:43.704 "enable_ktls": false 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "sock_impl_set_options", 00:17:43.704 "params": { 00:17:43.704 "impl_name": "posix", 00:17:43.704 "recv_buf_size": 2097152, 00:17:43.704 "send_buf_size": 2097152, 00:17:43.704 "enable_recv_pipe": true, 00:17:43.704 "enable_quickack": false, 00:17:43.704 "enable_placement_id": 0, 
00:17:43.704 "enable_zerocopy_send_server": true, 00:17:43.704 "enable_zerocopy_send_client": false, 00:17:43.704 "zerocopy_threshold": 0, 00:17:43.704 "tls_version": 0, 00:17:43.704 "enable_ktls": false 00:17:43.704 } 00:17:43.704 } 00:17:43.704 ] 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "subsystem": "vmd", 00:17:43.704 "config": [] 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "subsystem": "accel", 00:17:43.704 "config": [ 00:17:43.704 { 00:17:43.704 "method": "accel_set_options", 00:17:43.704 "params": { 00:17:43.704 "small_cache_size": 128, 00:17:43.704 "large_cache_size": 16, 00:17:43.704 "task_count": 2048, 00:17:43.704 "sequence_count": 2048, 00:17:43.704 "buf_count": 2048 00:17:43.704 } 00:17:43.704 } 00:17:43.704 ] 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "subsystem": "bdev", 00:17:43.704 "config": [ 00:17:43.704 { 00:17:43.704 "method": "bdev_set_options", 00:17:43.704 "params": { 00:17:43.704 "bdev_io_pool_size": 65535, 00:17:43.704 "bdev_io_cache_size": 256, 00:17:43.704 "bdev_auto_examine": true, 00:17:43.704 "iobuf_small_cache_size": 128, 00:17:43.704 "iobuf_large_cache_size": 16 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "bdev_raid_set_options", 00:17:43.704 "params": { 00:17:43.704 "process_window_size_kb": 1024 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "bdev_iscsi_set_options", 00:17:43.704 "params": { 00:17:43.704 "timeout_sec": 30 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "bdev_nvme_set_options", 00:17:43.704 "params": { 00:17:43.704 "action_on_timeout": "none", 00:17:43.704 "timeout_us": 0, 00:17:43.704 "timeout_admin_us": 0, 00:17:43.704 "keep_alive_timeout_ms": 10000, 00:17:43.704 "arbitration_burst": 0, 00:17:43.704 "low_priority_weight": 0, 00:17:43.704 "medium_priority_weight": 0, 00:17:43.704 "high_priority_weight": 0, 00:17:43.704 "nvme_adminq_poll_period_us": 10000, 00:17:43.704 "nvme_ioq_poll_period_us": 0, 00:17:43.704 "io_queue_requests": 512, 
00:17:43.704 "delay_cmd_submit": true, 00:17:43.704 "transport_retry_count": 4, 00:17:43.704 "bdev_retry_count": 3, 00:17:43.704 "transport_ack_timeout": 0, 00:17:43.704 "ctrlr_loss_timeout_sec": 0, 00:17:43.704 "reconnect_delay_sec": 0, 00:17:43.704 "fast_io_fail_timeout_sec": 0, 00:17:43.704 "disable_auto_failback": false, 00:17:43.704 "generate_uuids": false, 00:17:43.704 "transport_tos": 0, 00:17:43.704 "nvme_error_stat": false, 00:17:43.704 "rdma_srq_size": 0, 00:17:43.704 "io_path_stat": false, 00:17:43.704 "allow_accel_sequence": false, 00:17:43.704 "rdma_max_cq_size": 0, 00:17:43.704 "rdma_cm_event_timeout_ms": 0, 00:17:43.704 "dhchap_digests": [ 00:17:43.704 "sha256", 00:17:43.704 "sha384", 00:17:43.704 "sha512" 00:17:43.704 ], 00:17:43.704 "dhchap_dhgroups": [ 00:17:43.704 "null", 00:17:43.704 "ffdhe2048", 00:17:43.704 "ffdhe3072", 00:17:43.704 "ffdhe4096", 00:17:43.704 "ffdhe6144", 00:17:43.704 "ffdhe8192" 00:17:43.704 ] 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "bdev_nvme_attach_controller", 00:17:43.704 "params": { 00:17:43.704 "name": "TLSTEST", 00:17:43.704 "trtype": "TCP", 00:17:43.704 "adrfam": "IPv4", 00:17:43.704 "traddr": "10.0.0.2", 00:17:43.704 "trsvcid": "4420", 00:17:43.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.704 "prchk_reftag": false, 00:17:43.704 "prchk_guard": false, 00:17:43.704 "ctrlr_loss_timeout_sec": 0, 00:17:43.704 "reconnect_delay_sec": 0, 00:17:43.704 "fast_io_fail_timeout_sec": 0, 00:17:43.704 "psk": "/tmp/tmp.qljnWVlTvm", 00:17:43.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.704 "hdgst": false, 00:17:43.704 "ddgst": false 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "bdev_nvme_set_hotplug", 00:17:43.704 "params": { 00:17:43.704 "period_us": 100000, 00:17:43.704 "enable": false 00:17:43.704 } 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "method": "bdev_wait_for_examine" 00:17:43.704 } 00:17:43.704 ] 00:17:43.704 }, 00:17:43.704 { 00:17:43.704 "subsystem": 
"nbd", 00:17:43.704 "config": [] 00:17:43.704 } 00:17:43.704 ] 00:17:43.704 }' 00:17:43.704 11:22:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.704 [2024-07-12 11:22:09.758771] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:43.704 [2024-07-12 11:22:09.758860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598010 ] 00:17:43.704 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.704 [2024-07-12 11:22:09.817750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.961 [2024-07-12 11:22:09.927665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.961 [2024-07-12 11:22:10.091335] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.961 [2024-07-12 11:22:10.091470] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:44.891 11:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.891 11:22:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:44.891 11:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:44.891 Running I/O for 10 seconds... 
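The bdevperf process above receives its whole configuration as a JSON document on `/dev/fd/63` (the `-c /dev/fd/63` argument plus the long `echo '{ "subsystems": ... }'` in the trace). A minimal sketch of how that document is shaped, using a small illustrative subset of the values echoed in the log (the real config also carries `iobuf`, `accel`, full `sock_impl_set_options` and `bdev_nvme_set_options` sections):

```python
import json

# Skeleton of the JSON handed to bdevperf via -c /dev/fd/63 in the trace above.
# Values are copied from the echoed config; this is an illustrative subset,
# not the complete document.
config = {
    "subsystems": [
        {"subsystem": "keyring", "config": []},
        {
            "subsystem": "sock",
            "config": [
                {"method": "sock_set_default_impl",
                 "params": {"impl_name": "posix"}},
            ],
        },
        {
            "subsystem": "bdev",
            "config": [
                {
                    "method": "bdev_nvme_attach_controller",
                    "params": {
                        "name": "TLSTEST",
                        "trtype": "TCP",
                        "adrfam": "IPv4",
                        "traddr": "10.0.0.2",
                        "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        # Direct PSK file path -- the deprecated mechanism the
                        # nvmf_tcp_psk_path warnings in this log refer to.
                        "psk": "/tmp/tmp.qljnWVlTvm",
                    },
                },
                {"method": "bdev_wait_for_examine"},
            ],
        },
    ]
}

blob = json.dumps(config)
names = [s["subsystem"] for s in json.loads(blob)["subsystems"]]
print(names)
```

Because the config arrives over a file descriptor rather than an on-disk file, the PSK path is the only secret-bearing artifact that touches the filesystem in this flow.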
00:17:54.852 00:17:54.852 Latency(us) 00:17:54.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.852 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:54.852 Verification LBA range: start 0x0 length 0x2000 00:17:54.852 TLSTESTn1 : 10.04 3454.12 13.49 0.00 0.00 36967.30 7767.23 36700.16 00:17:54.852 =================================================================================================================== 00:17:54.852 Total : 3454.12 13.49 0.00 0.00 36967.30 7767.23 36700.16 00:17:54.852 0 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 598010 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 598010 ']' 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 598010 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.852 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 598010 00:17:55.110 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:55.110 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:55.110 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 598010' 00:17:55.110 killing process with pid 598010 00:17:55.110 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 598010 00:17:55.110 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.110 00:17:55.110 Latency(us) 00:17:55.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.110 
=================================================================================================================== 00:17:55.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.110 [2024-07-12 11:22:20.990465] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.110 11:22:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 598010 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 597857 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 597857 ']' 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 597857 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597857 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597857' 00:17:55.368 killing process with pid 597857 00:17:55.368 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 597857 00:17:55.368 [2024-07-12 11:22:21.290343] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:55.369 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 597857 00:17:55.626 11:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:55.626 11:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.626 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 
00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=599341 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 599341 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 599341 ']' 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.627 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.627 [2024-07-12 11:22:21.626511] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:55.627 [2024-07-12 11:22:21.626606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.627 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.627 [2024-07-12 11:22:21.690310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.885 [2024-07-12 11:22:21.797219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:55.885 [2024-07-12 11:22:21.797272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.885 [2024-07-12 11:22:21.797292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.885 [2024-07-12 11:22:21.797303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.885 [2024-07-12 11:22:21.797312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.885 [2024-07-12 11:22:21.797342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.qljnWVlTvm 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qljnWVlTvm 00:17:55.885 11:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:56.142 [2024-07-12 11:22:22.148630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.142 11:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:56.399 11:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:56.656 [2024-07-12 11:22:22.617846] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.656 [2024-07-12 11:22:22.618038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.656 11:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:56.914 malloc0 00:17:56.914 11:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.171 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qljnWVlTvm 00:17:57.428 [2024-07-12 11:22:23.354264] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=599626 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 599626 /var/tmp/bdevperf.sock 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 599626 ']' 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
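The file `/tmp/tmp.qljnWVlTvm` passed to `nvmf_subsystem_add_host --psk` above holds the TLS pre-shared key in the NVMe/TCP PSK interchange format. A hedged sketch of that wrapping (format as specified for NVMe/TCP TLS and implemented by nvme-cli's `gen-tls-key` and SPDK; the key bytes below are made up for illustration, not the key from this run):

```python
import base64
import zlib

def psk_interchange(key: bytes, hmac_id: int = 1) -> str:
    """Wrap raw PSK bytes in the NVMe/TCP PSK interchange format:
    'NVMeTLSkey-1:<hh>:<base64(key || crc32(key) little-endian)>:'.
    hmac_id 1 selects SHA-256 for retained-PSK derivation."""
    crc = zlib.crc32(key).to_bytes(4, "little")
    body = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:%02d:%s:" % (hmac_id, body)

# Hypothetical 32-byte configured key -- NOT the key used in this test run.
demo = psk_interchange(bytes(range(32)))
print(demo.startswith("NVMeTLSkey-1:01:"))
```

The trailing CRC32 lets the consumer detect a corrupted or truncated key file before any TLS handshake is attempted.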
max_retries=100 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.428 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.428 [2024-07-12 11:22:23.411645] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:17:57.428 [2024-07-12 11:22:23.411728] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599626 ] 00:17:57.428 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.428 [2024-07-12 11:22:23.469956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.685 [2024-07-12 11:22:23.578566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.685 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.685 11:22:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.685 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qljnWVlTvm 00:17:57.971 11:22:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:58.252 [2024-07-12 11:22:24.150126] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.252 
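This second bdevperf run drives TLS through the keyring instead of the deprecated direct PSK path: the key file is first registered under the name `key0` with `keyring_file_add_key`, then referenced by that name in `bdev_nvme_attach_controller --psk key0`. A sketch of the two-step RPC sequence as argv lists (paths shortened from the Jenkins workspace paths in the log; this does not talk to a live SPDK instance):

```python
import shlex

RPC = "scripts/rpc.py"          # relative path under an SPDK tree (the log uses an absolute one)
SOCK = "/var/tmp/bdevperf.sock" # bdevperf's RPC socket, per -r above

def rpc(*args: str) -> list[str]:
    """Build an rpc.py argv targeting the bdevperf RPC socket."""
    return [RPC, "-s", SOCK, *args]

# Step 1: register the PSK interchange file under the key name "key0".
add_key = rpc("keyring_file_add_key", "key0", "/tmp/tmp.qljnWVlTvm")

# Step 2: attach the TLS-enabled controller, referencing the key by name
# rather than by file path.
attach = rpc(
    "bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
    "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4", "--psk", "key0",
    "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
)
print(shlex.join(add_key))
```

Indirecting through a named key is what replaces `spdk_nvme_ctrlr_opts.psk`, whose deprecation warning appears earlier in this log.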
nvme0n1 00:17:58.252 11:22:24 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.252 Running I/O for 1 seconds... 00:17:59.625 00:17:59.625 Latency(us) 00:17:59.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.625 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:59.625 Verification LBA range: start 0x0 length 0x2000 00:17:59.626 nvme0n1 : 1.02 3574.80 13.96 0.00 0.00 35460.46 6262.33 30292.20 00:17:59.626 =================================================================================================================== 00:17:59.626 Total : 3574.80 13.96 0.00 0.00 35460.46 6262.33 30292.20 00:17:59.626 0 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 599626 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 599626 ']' 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 599626 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599626 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599626' 00:17:59.626 killing process with pid 599626 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 599626 00:17:59.626 Received shutdown signal, test time was about 1.000000 seconds 00:17:59.626 00:17:59.626 Latency(us) 00:17:59.626 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:59.626 =================================================================================================================== 00:17:59.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 599626 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 599341 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 599341 ']' 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 599341 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599341 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599341' 00:17:59.626 killing process with pid 599341 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 599341 00:17:59.626 [2024-07-12 11:22:25.705569] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:59.626 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 599341 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- 
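In the bdevperf result tables above, the MiB/s column is just IOPS multiplied by the fixed 4096-byte I/O size (`-o 4096` / `-o 4k`). A quick cross-check against the two completed runs:

```python
def mibps(iops: float, io_size: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

# 10 s TLS run: TLSTESTn1 reported 3454.12 IOPS / 13.49 MiB/s
print(round(mibps(3454.12), 2))  # -> 13.49
# 1 s keyring run: nvme0n1 reported 3574.80 IOPS / 13.96 MiB/s
print(round(mibps(3574.80), 2))  # -> 13.96
```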
nvmf/common.sh@481 -- # nvmfpid=599906 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 599906 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 599906 ']' 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.886 11:22:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.144 [2024-07-12 11:22:26.038660] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:18:00.144 [2024-07-12 11:22:26.038761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.144 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.144 [2024-07-12 11:22:26.102421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.144 [2024-07-12 11:22:26.201142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.144 [2024-07-12 11:22:26.201212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:00.144 [2024-07-12 11:22:26.201235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.144 [2024-07-12 11:22:26.201245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.144 [2024-07-12 11:22:26.201255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.144 [2024-07-12 11:22:26.201279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.402 [2024-07-12 11:22:26.336691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.402 malloc0 00:18:00.402 [2024-07-12 11:22:26.368523] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.402 [2024-07-12 11:22:26.368732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=599935 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 599935 /var/tmp/bdevperf.sock 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 599935 ']' 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.402 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.403 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.403 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.403 [2024-07-12 11:22:26.436089] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:18:00.403 [2024-07-12 11:22:26.436153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599935 ] 00:18:00.403 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.403 [2024-07-12 11:22:26.492915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.660 [2024-07-12 11:22:26.603238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.660 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.660 11:22:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:00.660 11:22:26 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qljnWVlTvm 00:18:00.917 11:22:27 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:01.175 [2024-07-12 11:22:27.287424] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.432 nvme0n1 00:18:01.432 11:22:27 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.432 Running I/O for 1 seconds... 
00:18:02.803 00:18:02.803 Latency(us) 00:18:02.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.803 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:02.803 Verification LBA range: start 0x0 length 0x2000 00:18:02.803 nvme0n1 : 1.02 3412.65 13.33 0.00 0.00 37131.91 6213.78 41166.32 00:18:02.803 =================================================================================================================== 00:18:02.803 Total : 3412.65 13.33 0.00 0.00 37131.91 6213.78 41166.32 00:18:02.803 0 00:18:02.803 11:22:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:02.803 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.803 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.803 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.803 11:22:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:02.803 "subsystems": [ 00:18:02.803 { 00:18:02.803 "subsystem": "keyring", 00:18:02.803 "config": [ 00:18:02.803 { 00:18:02.803 "method": "keyring_file_add_key", 00:18:02.803 "params": { 00:18:02.803 "name": "key0", 00:18:02.803 "path": "/tmp/tmp.qljnWVlTvm" 00:18:02.803 } 00:18:02.803 } 00:18:02.803 ] 00:18:02.803 }, 00:18:02.803 { 00:18:02.803 "subsystem": "iobuf", 00:18:02.803 "config": [ 00:18:02.803 { 00:18:02.803 "method": "iobuf_set_options", 00:18:02.803 "params": { 00:18:02.803 "small_pool_count": 8192, 00:18:02.804 "large_pool_count": 1024, 00:18:02.804 "small_bufsize": 8192, 00:18:02.804 "large_bufsize": 135168 00:18:02.804 } 00:18:02.804 } 00:18:02.804 ] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "sock", 00:18:02.804 "config": [ 00:18:02.804 { 00:18:02.804 "method": "sock_set_default_impl", 00:18:02.804 "params": { 00:18:02.804 "impl_name": "posix" 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "sock_impl_set_options", 00:18:02.804 
"params": { 00:18:02.804 "impl_name": "ssl", 00:18:02.804 "recv_buf_size": 4096, 00:18:02.804 "send_buf_size": 4096, 00:18:02.804 "enable_recv_pipe": true, 00:18:02.804 "enable_quickack": false, 00:18:02.804 "enable_placement_id": 0, 00:18:02.804 "enable_zerocopy_send_server": true, 00:18:02.804 "enable_zerocopy_send_client": false, 00:18:02.804 "zerocopy_threshold": 0, 00:18:02.804 "tls_version": 0, 00:18:02.804 "enable_ktls": false 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "sock_impl_set_options", 00:18:02.804 "params": { 00:18:02.804 "impl_name": "posix", 00:18:02.804 "recv_buf_size": 2097152, 00:18:02.804 "send_buf_size": 2097152, 00:18:02.804 "enable_recv_pipe": true, 00:18:02.804 "enable_quickack": false, 00:18:02.804 "enable_placement_id": 0, 00:18:02.804 "enable_zerocopy_send_server": true, 00:18:02.804 "enable_zerocopy_send_client": false, 00:18:02.804 "zerocopy_threshold": 0, 00:18:02.804 "tls_version": 0, 00:18:02.804 "enable_ktls": false 00:18:02.804 } 00:18:02.804 } 00:18:02.804 ] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "vmd", 00:18:02.804 "config": [] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "accel", 00:18:02.804 "config": [ 00:18:02.804 { 00:18:02.804 "method": "accel_set_options", 00:18:02.804 "params": { 00:18:02.804 "small_cache_size": 128, 00:18:02.804 "large_cache_size": 16, 00:18:02.804 "task_count": 2048, 00:18:02.804 "sequence_count": 2048, 00:18:02.804 "buf_count": 2048 00:18:02.804 } 00:18:02.804 } 00:18:02.804 ] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "bdev", 00:18:02.804 "config": [ 00:18:02.804 { 00:18:02.804 "method": "bdev_set_options", 00:18:02.804 "params": { 00:18:02.804 "bdev_io_pool_size": 65535, 00:18:02.804 "bdev_io_cache_size": 256, 00:18:02.804 "bdev_auto_examine": true, 00:18:02.804 "iobuf_small_cache_size": 128, 00:18:02.804 "iobuf_large_cache_size": 16 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "bdev_raid_set_options", 
00:18:02.804 "params": { 00:18:02.804 "process_window_size_kb": 1024 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "bdev_iscsi_set_options", 00:18:02.804 "params": { 00:18:02.804 "timeout_sec": 30 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "bdev_nvme_set_options", 00:18:02.804 "params": { 00:18:02.804 "action_on_timeout": "none", 00:18:02.804 "timeout_us": 0, 00:18:02.804 "timeout_admin_us": 0, 00:18:02.804 "keep_alive_timeout_ms": 10000, 00:18:02.804 "arbitration_burst": 0, 00:18:02.804 "low_priority_weight": 0, 00:18:02.804 "medium_priority_weight": 0, 00:18:02.804 "high_priority_weight": 0, 00:18:02.804 "nvme_adminq_poll_period_us": 10000, 00:18:02.804 "nvme_ioq_poll_period_us": 0, 00:18:02.804 "io_queue_requests": 0, 00:18:02.804 "delay_cmd_submit": true, 00:18:02.804 "transport_retry_count": 4, 00:18:02.804 "bdev_retry_count": 3, 00:18:02.804 "transport_ack_timeout": 0, 00:18:02.804 "ctrlr_loss_timeout_sec": 0, 00:18:02.804 "reconnect_delay_sec": 0, 00:18:02.804 "fast_io_fail_timeout_sec": 0, 00:18:02.804 "disable_auto_failback": false, 00:18:02.804 "generate_uuids": false, 00:18:02.804 "transport_tos": 0, 00:18:02.804 "nvme_error_stat": false, 00:18:02.804 "rdma_srq_size": 0, 00:18:02.804 "io_path_stat": false, 00:18:02.804 "allow_accel_sequence": false, 00:18:02.804 "rdma_max_cq_size": 0, 00:18:02.804 "rdma_cm_event_timeout_ms": 0, 00:18:02.804 "dhchap_digests": [ 00:18:02.804 "sha256", 00:18:02.804 "sha384", 00:18:02.804 "sha512" 00:18:02.804 ], 00:18:02.804 "dhchap_dhgroups": [ 00:18:02.804 "null", 00:18:02.804 "ffdhe2048", 00:18:02.804 "ffdhe3072", 00:18:02.804 "ffdhe4096", 00:18:02.804 "ffdhe6144", 00:18:02.804 "ffdhe8192" 00:18:02.804 ] 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "bdev_nvme_set_hotplug", 00:18:02.804 "params": { 00:18:02.804 "period_us": 100000, 00:18:02.804 "enable": false 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "bdev_malloc_create", 
00:18:02.804 "params": { 00:18:02.804 "name": "malloc0", 00:18:02.804 "num_blocks": 8192, 00:18:02.804 "block_size": 4096, 00:18:02.804 "physical_block_size": 4096, 00:18:02.804 "uuid": "0174bc02-6044-4c9f-a14a-d593e0e6ef2f", 00:18:02.804 "optimal_io_boundary": 0 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "bdev_wait_for_examine" 00:18:02.804 } 00:18:02.804 ] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "nbd", 00:18:02.804 "config": [] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "scheduler", 00:18:02.804 "config": [ 00:18:02.804 { 00:18:02.804 "method": "framework_set_scheduler", 00:18:02.804 "params": { 00:18:02.804 "name": "static" 00:18:02.804 } 00:18:02.804 } 00:18:02.804 ] 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "subsystem": "nvmf", 00:18:02.804 "config": [ 00:18:02.804 { 00:18:02.804 "method": "nvmf_set_config", 00:18:02.804 "params": { 00:18:02.804 "discovery_filter": "match_any", 00:18:02.804 "admin_cmd_passthru": { 00:18:02.804 "identify_ctrlr": false 00:18:02.804 } 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "nvmf_set_max_subsystems", 00:18:02.804 "params": { 00:18:02.804 "max_subsystems": 1024 00:18:02.804 } 00:18:02.804 }, 00:18:02.804 { 00:18:02.804 "method": "nvmf_set_crdt", 00:18:02.804 "params": { 00:18:02.804 "crdt1": 0, 00:18:02.804 "crdt2": 0, 00:18:02.804 "crdt3": 0 00:18:02.804 } 00:18:02.804 }, 00:18:02.805 { 00:18:02.805 "method": "nvmf_create_transport", 00:18:02.805 "params": { 00:18:02.805 "trtype": "TCP", 00:18:02.805 "max_queue_depth": 128, 00:18:02.805 "max_io_qpairs_per_ctrlr": 127, 00:18:02.805 "in_capsule_data_size": 4096, 00:18:02.805 "max_io_size": 131072, 00:18:02.805 "io_unit_size": 131072, 00:18:02.805 "max_aq_depth": 128, 00:18:02.805 "num_shared_buffers": 511, 00:18:02.805 "buf_cache_size": 4294967295, 00:18:02.805 "dif_insert_or_strip": false, 00:18:02.805 "zcopy": false, 00:18:02.805 "c2h_success": false, 00:18:02.805 "sock_priority": 0, 
00:18:02.805 "abort_timeout_sec": 1, 00:18:02.805 "ack_timeout": 0, 00:18:02.805 "data_wr_pool_size": 0 00:18:02.805 } 00:18:02.805 }, 00:18:02.805 { 00:18:02.805 "method": "nvmf_create_subsystem", 00:18:02.805 "params": { 00:18:02.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.805 "allow_any_host": false, 00:18:02.805 "serial_number": "00000000000000000000", 00:18:02.805 "model_number": "SPDK bdev Controller", 00:18:02.805 "max_namespaces": 32, 00:18:02.805 "min_cntlid": 1, 00:18:02.805 "max_cntlid": 65519, 00:18:02.805 "ana_reporting": false 00:18:02.805 } 00:18:02.805 }, 00:18:02.805 { 00:18:02.805 "method": "nvmf_subsystem_add_host", 00:18:02.805 "params": { 00:18:02.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.805 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.805 "psk": "key0" 00:18:02.805 } 00:18:02.805 }, 00:18:02.805 { 00:18:02.805 "method": "nvmf_subsystem_add_ns", 00:18:02.805 "params": { 00:18:02.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.805 "namespace": { 00:18:02.805 "nsid": 1, 00:18:02.805 "bdev_name": "malloc0", 00:18:02.805 "nguid": "0174BC0260444C9FA14AD593E0E6EF2F", 00:18:02.805 "uuid": "0174bc02-6044-4c9f-a14a-d593e0e6ef2f", 00:18:02.805 "no_auto_visible": false 00:18:02.805 } 00:18:02.805 } 00:18:02.805 }, 00:18:02.805 { 00:18:02.805 "method": "nvmf_subsystem_add_listener", 00:18:02.805 "params": { 00:18:02.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.805 "listen_address": { 00:18:02.805 "trtype": "TCP", 00:18:02.805 "adrfam": "IPv4", 00:18:02.805 "traddr": "10.0.0.2", 00:18:02.805 "trsvcid": "4420" 00:18:02.805 }, 00:18:02.805 "secure_channel": true 00:18:02.805 } 00:18:02.805 } 00:18:02.805 ] 00:18:02.805 } 00:18:02.805 ] 00:18:02.805 }' 00:18:02.805 11:22:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:03.063 11:22:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:03.063 "subsystems": [ 00:18:03.063 { 
00:18:03.063 "subsystem": "keyring", 00:18:03.063 "config": [ 00:18:03.063 { 00:18:03.063 "method": "keyring_file_add_key", 00:18:03.063 "params": { 00:18:03.063 "name": "key0", 00:18:03.063 "path": "/tmp/tmp.qljnWVlTvm" 00:18:03.063 } 00:18:03.063 } 00:18:03.063 ] 00:18:03.063 }, 00:18:03.063 { 00:18:03.063 "subsystem": "iobuf", 00:18:03.063 "config": [ 00:18:03.063 { 00:18:03.063 "method": "iobuf_set_options", 00:18:03.063 "params": { 00:18:03.063 "small_pool_count": 8192, 00:18:03.063 "large_pool_count": 1024, 00:18:03.063 "small_bufsize": 8192, 00:18:03.063 "large_bufsize": 135168 00:18:03.063 } 00:18:03.063 } 00:18:03.063 ] 00:18:03.063 }, 00:18:03.063 { 00:18:03.064 "subsystem": "sock", 00:18:03.064 "config": [ 00:18:03.064 { 00:18:03.064 "method": "sock_set_default_impl", 00:18:03.064 "params": { 00:18:03.064 "impl_name": "posix" 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "sock_impl_set_options", 00:18:03.064 "params": { 00:18:03.064 "impl_name": "ssl", 00:18:03.064 "recv_buf_size": 4096, 00:18:03.064 "send_buf_size": 4096, 00:18:03.064 "enable_recv_pipe": true, 00:18:03.064 "enable_quickack": false, 00:18:03.064 "enable_placement_id": 0, 00:18:03.064 "enable_zerocopy_send_server": true, 00:18:03.064 "enable_zerocopy_send_client": false, 00:18:03.064 "zerocopy_threshold": 0, 00:18:03.064 "tls_version": 0, 00:18:03.064 "enable_ktls": false 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "sock_impl_set_options", 00:18:03.064 "params": { 00:18:03.064 "impl_name": "posix", 00:18:03.064 "recv_buf_size": 2097152, 00:18:03.064 "send_buf_size": 2097152, 00:18:03.064 "enable_recv_pipe": true, 00:18:03.064 "enable_quickack": false, 00:18:03.064 "enable_placement_id": 0, 00:18:03.064 "enable_zerocopy_send_server": true, 00:18:03.064 "enable_zerocopy_send_client": false, 00:18:03.064 "zerocopy_threshold": 0, 00:18:03.064 "tls_version": 0, 00:18:03.064 "enable_ktls": false 00:18:03.064 } 00:18:03.064 } 00:18:03.064 ] 
00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "subsystem": "vmd", 00:18:03.064 "config": [] 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "subsystem": "accel", 00:18:03.064 "config": [ 00:18:03.064 { 00:18:03.064 "method": "accel_set_options", 00:18:03.064 "params": { 00:18:03.064 "small_cache_size": 128, 00:18:03.064 "large_cache_size": 16, 00:18:03.064 "task_count": 2048, 00:18:03.064 "sequence_count": 2048, 00:18:03.064 "buf_count": 2048 00:18:03.064 } 00:18:03.064 } 00:18:03.064 ] 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "subsystem": "bdev", 00:18:03.064 "config": [ 00:18:03.064 { 00:18:03.064 "method": "bdev_set_options", 00:18:03.064 "params": { 00:18:03.064 "bdev_io_pool_size": 65535, 00:18:03.064 "bdev_io_cache_size": 256, 00:18:03.064 "bdev_auto_examine": true, 00:18:03.064 "iobuf_small_cache_size": 128, 00:18:03.064 "iobuf_large_cache_size": 16 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_raid_set_options", 00:18:03.064 "params": { 00:18:03.064 "process_window_size_kb": 1024 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_iscsi_set_options", 00:18:03.064 "params": { 00:18:03.064 "timeout_sec": 30 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_nvme_set_options", 00:18:03.064 "params": { 00:18:03.064 "action_on_timeout": "none", 00:18:03.064 "timeout_us": 0, 00:18:03.064 "timeout_admin_us": 0, 00:18:03.064 "keep_alive_timeout_ms": 10000, 00:18:03.064 "arbitration_burst": 0, 00:18:03.064 "low_priority_weight": 0, 00:18:03.064 "medium_priority_weight": 0, 00:18:03.064 "high_priority_weight": 0, 00:18:03.064 "nvme_adminq_poll_period_us": 10000, 00:18:03.064 "nvme_ioq_poll_period_us": 0, 00:18:03.064 "io_queue_requests": 512, 00:18:03.064 "delay_cmd_submit": true, 00:18:03.064 "transport_retry_count": 4, 00:18:03.064 "bdev_retry_count": 3, 00:18:03.064 "transport_ack_timeout": 0, 00:18:03.064 "ctrlr_loss_timeout_sec": 0, 00:18:03.064 "reconnect_delay_sec": 0, 00:18:03.064 
"fast_io_fail_timeout_sec": 0, 00:18:03.064 "disable_auto_failback": false, 00:18:03.064 "generate_uuids": false, 00:18:03.064 "transport_tos": 0, 00:18:03.064 "nvme_error_stat": false, 00:18:03.064 "rdma_srq_size": 0, 00:18:03.064 "io_path_stat": false, 00:18:03.064 "allow_accel_sequence": false, 00:18:03.064 "rdma_max_cq_size": 0, 00:18:03.064 "rdma_cm_event_timeout_ms": 0, 00:18:03.064 "dhchap_digests": [ 00:18:03.064 "sha256", 00:18:03.064 "sha384", 00:18:03.064 "sha512" 00:18:03.064 ], 00:18:03.064 "dhchap_dhgroups": [ 00:18:03.064 "null", 00:18:03.064 "ffdhe2048", 00:18:03.064 "ffdhe3072", 00:18:03.064 "ffdhe4096", 00:18:03.064 "ffdhe6144", 00:18:03.064 "ffdhe8192" 00:18:03.064 ] 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_nvme_attach_controller", 00:18:03.064 "params": { 00:18:03.064 "name": "nvme0", 00:18:03.064 "trtype": "TCP", 00:18:03.064 "adrfam": "IPv4", 00:18:03.064 "traddr": "10.0.0.2", 00:18:03.064 "trsvcid": "4420", 00:18:03.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.064 "prchk_reftag": false, 00:18:03.064 "prchk_guard": false, 00:18:03.064 "ctrlr_loss_timeout_sec": 0, 00:18:03.064 "reconnect_delay_sec": 0, 00:18:03.064 "fast_io_fail_timeout_sec": 0, 00:18:03.064 "psk": "key0", 00:18:03.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.064 "hdgst": false, 00:18:03.064 "ddgst": false 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_nvme_set_hotplug", 00:18:03.064 "params": { 00:18:03.064 "period_us": 100000, 00:18:03.064 "enable": false 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_enable_histogram", 00:18:03.064 "params": { 00:18:03.064 "name": "nvme0n1", 00:18:03.064 "enable": true 00:18:03.064 } 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "method": "bdev_wait_for_examine" 00:18:03.064 } 00:18:03.064 ] 00:18:03.064 }, 00:18:03.064 { 00:18:03.064 "subsystem": "nbd", 00:18:03.064 "config": [] 00:18:03.064 } 00:18:03.064 ] 00:18:03.064 }' 00:18:03.064 
11:22:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 599935 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 599935 ']' 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 599935 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599935 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599935' 00:18:03.064 killing process with pid 599935 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 599935 00:18:03.064 Received shutdown signal, test time was about 1.000000 seconds 00:18:03.064 00:18:03.064 Latency(us) 00:18:03.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.064 =================================================================================================================== 00:18:03.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.064 11:22:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 599935 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 599906 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 599906 ']' 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 599906 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599906 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599906' 00:18:03.321 killing process with pid 599906 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 599906 00:18:03.321 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 599906 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:03.580 "subsystems": [ 00:18:03.580 { 00:18:03.580 "subsystem": "keyring", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "keyring_file_add_key", 00:18:03.580 "params": { 00:18:03.580 "name": "key0", 00:18:03.580 "path": "/tmp/tmp.qljnWVlTvm" 00:18:03.580 } 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "iobuf", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "iobuf_set_options", 00:18:03.580 "params": { 00:18:03.580 "small_pool_count": 8192, 00:18:03.580 "large_pool_count": 1024, 00:18:03.580 "small_bufsize": 8192, 00:18:03.580 "large_bufsize": 135168 00:18:03.580 } 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "sock", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "sock_set_default_impl", 00:18:03.580 "params": { 00:18:03.580 "impl_name": "posix" 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "sock_impl_set_options", 00:18:03.580 "params": { 00:18:03.580 "impl_name": "ssl", 00:18:03.580 "recv_buf_size": 4096, 00:18:03.580 "send_buf_size": 4096, 
00:18:03.580 "enable_recv_pipe": true, 00:18:03.580 "enable_quickack": false, 00:18:03.580 "enable_placement_id": 0, 00:18:03.580 "enable_zerocopy_send_server": true, 00:18:03.580 "enable_zerocopy_send_client": false, 00:18:03.580 "zerocopy_threshold": 0, 00:18:03.580 "tls_version": 0, 00:18:03.580 "enable_ktls": false 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "sock_impl_set_options", 00:18:03.580 "params": { 00:18:03.580 "impl_name": "posix", 00:18:03.580 "recv_buf_size": 2097152, 00:18:03.580 "send_buf_size": 2097152, 00:18:03.580 "enable_recv_pipe": true, 00:18:03.580 "enable_quickack": false, 00:18:03.580 "enable_placement_id": 0, 00:18:03.580 "enable_zerocopy_send_server": true, 00:18:03.580 "enable_zerocopy_send_client": false, 00:18:03.580 "zerocopy_threshold": 0, 00:18:03.580 "tls_version": 0, 00:18:03.580 "enable_ktls": false 00:18:03.580 } 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "vmd", 00:18:03.580 "config": [] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "accel", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "accel_set_options", 00:18:03.580 "params": { 00:18:03.580 "small_cache_size": 128, 00:18:03.580 "large_cache_size": 16, 00:18:03.580 "task_count": 2048, 00:18:03.580 "sequence_count": 2048, 00:18:03.580 "buf_count": 2048 00:18:03.580 } 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "bdev", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "bdev_set_options", 00:18:03.580 "params": { 00:18:03.580 "bdev_io_pool_size": 65535, 00:18:03.580 "bdev_io_cache_size": 256, 00:18:03.580 "bdev_auto_examine": true, 00:18:03.580 "iobuf_small_cache_size": 128, 00:18:03.580 "iobuf_large_cache_size": 16 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "bdev_raid_set_options", 00:18:03.580 "params": { 00:18:03.580 "process_window_size_kb": 1024 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 
00:18:03.580 "method": "bdev_iscsi_set_options", 00:18:03.580 "params": { 00:18:03.580 "timeout_sec": 30 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "bdev_nvme_set_options", 00:18:03.580 "params": { 00:18:03.580 "action_on_timeout": "none", 00:18:03.580 "timeout_us": 0, 00:18:03.580 "timeout_admin_us": 0, 00:18:03.580 "keep_alive_timeout_ms": 10000, 00:18:03.580 "arbitration_burst": 0, 00:18:03.580 "low_priority_weight": 0, 00:18:03.580 "medium_priority_weight": 0, 00:18:03.580 "high_priority_weight": 0, 00:18:03.580 "nvme_adminq_poll_period_us": 10000, 00:18:03.580 "nvme_ioq_poll_period_us": 0, 00:18:03.580 "io_queue_requests": 0, 00:18:03.580 "delay_cmd_submit": true, 00:18:03.580 "transport_retry_count": 4, 00:18:03.580 "bdev_retry_count": 3, 00:18:03.580 "transport_ack_timeout": 0, 00:18:03.580 "ctrlr_loss_timeout_sec": 0, 00:18:03.580 "reconnect_delay_sec": 0, 00:18:03.580 "fast_io_fail_timeout_sec": 0, 00:18:03.580 "disable_auto_failback": false, 00:18:03.580 "generate_uuids": false, 00:18:03.580 "transport_tos": 0, 00:18:03.580 "nvme_error_stat": false, 00:18:03.580 "rdma_srq_size": 0, 00:18:03.580 "io_path_stat": false, 00:18:03.580 "allow_accel_sequence": false, 00:18:03.580 "rdma_max_cq_size": 0, 00:18:03.580 "rdma_cm_event_timeout_ms": 0, 00:18:03.580 "dhchap_digests": [ 00:18:03.580 "sha256", 00:18:03.580 "sha384", 00:18:03.580 "sha512" 00:18:03.580 ], 00:18:03.580 "dhchap_dhgroups": [ 00:18:03.580 "null", 00:18:03.580 "ffdhe2048", 00:18:03.580 "ffdhe3072", 00:18:03.580 "ffdhe4096", 00:18:03.580 "ffdhe6144", 00:18:03.580 "ffdhe8192" 00:18:03.580 ] 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "bdev_nvme_set_hotplug", 00:18:03.580 "params": { 00:18:03.580 "period_us": 100000, 00:18:03.580 "enable": false 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "bdev_malloc_create", 00:18:03.580 "params": { 00:18:03.580 "name": "malloc0", 00:18:03.580 "num_blocks": 8192, 00:18:03.580 
"block_size": 4096, 00:18:03.580 "physical_block_size": 4096, 00:18:03.580 "uuid": "0174bc02-6044-4c9f-a14a-d593e0e6ef2f", 00:18:03.580 "optimal_io_boundary": 0 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "bdev_wait_for_examine" 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "nbd", 00:18:03.580 "config": [] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "scheduler", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "framework_set_scheduler", 00:18:03.580 "params": { 00:18:03.580 "name": "static" 00:18:03.580 } 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "subsystem": "nvmf", 00:18:03.580 "config": [ 00:18:03.580 { 00:18:03.580 "method": "nvmf_set_config", 00:18:03.580 "params": { 00:18:03.580 "discovery_filter": "match_any", 00:18:03.580 "admin_cmd_passthru": { 00:18:03.580 "identify_ctrlr": false 00:18:03.580 } 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_set_max_subsystems", 00:18:03.580 "params": { 00:18:03.580 "max_subsystems": 1024 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_set_crdt", 00:18:03.580 "params": { 00:18:03.580 "crdt1": 0, 00:18:03.580 "crdt2": 0, 00:18:03.580 "crdt3": 0 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_create_transport", 00:18:03.580 "params": { 00:18:03.580 "trtype": "TCP", 00:18:03.580 "max_queue_depth": 128, 00:18:03.580 "max_io_qpairs_per_ctrlr": 127, 00:18:03.580 "in_capsule_data_size": 4096, 00:18:03.580 "max_io_size": 131072, 00:18:03.580 "io_unit_size": 131072, 00:18:03.580 "max_aq_depth": 128, 00:18:03.580 "num_shared_buffers": 511, 00:18:03.580 "buf_cache_size": 4294967295, 00:18:03.580 "dif_insert_or_strip": false, 00:18:03.580 "zcopy": false, 00:18:03.580 "c2h_success": false, 00:18:03.580 "sock_priority": 0, 00:18:03.580 "abort_timeout_sec": 1, 00:18:03.580 "ack_timeout": 0, 00:18:03.580 "data_wr_pool_size": 0 
00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_create_subsystem", 00:18:03.580 "params": { 00:18:03.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.580 "allow_any_host": false, 00:18:03.580 "serial_number": "00000000000000000000", 00:18:03.580 "model_number": "SPDK bdev Controller", 00:18:03.580 "max_namespaces": 32, 00:18:03.580 "min_cntlid": 1, 00:18:03.580 "max_cntlid": 65519, 00:18:03.580 "ana_reporting": false 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_subsystem_add_host", 00:18:03.580 "params": { 00:18:03.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.580 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.580 "psk": "key0" 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_subsystem_add_ns", 00:18:03.580 "params": { 00:18:03.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.580 "namespace": { 00:18:03.580 "nsid": 1, 00:18:03.580 "bdev_name": "malloc0", 00:18:03.580 "nguid": "0174BC0260444C9FA14AD593E0E6EF2F", 00:18:03.580 "uuid": "0174bc02-6044-4c9f-a14a-d593e0e6ef2f", 00:18:03.580 "no_auto_visible": false 00:18:03.580 } 00:18:03.580 } 00:18:03.580 }, 00:18:03.580 { 00:18:03.580 "method": "nvmf_subsystem_add_listener", 00:18:03.580 "params": { 00:18:03.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.580 "listen_address": { 00:18:03.580 "trtype": "TCP", 00:18:03.580 "adrfam": "IPv4", 00:18:03.580 "traddr": "10.0.0.2", 00:18:03.580 "trsvcid": "4420" 00:18:03.580 }, 00:18:03.580 "secure_channel": true 00:18:03.580 } 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 } 00:18:03.580 ] 00:18:03.580 }' 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=600341 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 600341 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 600341 ']' 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.580 11:22:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.580 [2024-07-12 11:22:29.622275] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:18:03.580 [2024-07-12 11:22:29.622381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.580 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.580 [2024-07-12 11:22:29.685316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.838 [2024-07-12 11:22:29.784904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.838 [2024-07-12 11:22:29.784967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.838 [2024-07-12 11:22:29.784991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.838 [2024-07-12 11:22:29.785002] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.838 [2024-07-12 11:22:29.785012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.838 [2024-07-12 11:22:29.785090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.095 [2024-07-12 11:22:30.021191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.095 [2024-07-12 11:22:30.053192] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.095 [2024-07-12 11:22:30.068003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=600492 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 600492 /var/tmp/bdevperf.sock 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 600492 ']' 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.661 11:22:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:04.661 "subsystems": [ 00:18:04.661 { 00:18:04.661 "subsystem": "keyring", 00:18:04.661 "config": [ 00:18:04.661 { 00:18:04.661 "method": "keyring_file_add_key", 00:18:04.661 "params": { 00:18:04.661 "name": "key0", 00:18:04.661 "path": "/tmp/tmp.qljnWVlTvm" 00:18:04.661 } 00:18:04.661 } 00:18:04.661 ] 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "subsystem": "iobuf", 00:18:04.661 "config": [ 00:18:04.661 { 00:18:04.661 "method": "iobuf_set_options", 00:18:04.661 "params": { 00:18:04.661 "small_pool_count": 8192, 00:18:04.661 "large_pool_count": 1024, 00:18:04.661 "small_bufsize": 8192, 00:18:04.661 "large_bufsize": 135168 00:18:04.661 } 00:18:04.661 } 00:18:04.661 ] 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "subsystem": "sock", 00:18:04.661 "config": [ 00:18:04.661 { 00:18:04.661 "method": "sock_set_default_impl", 00:18:04.661 "params": { 00:18:04.661 "impl_name": "posix" 00:18:04.661 } 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "method": "sock_impl_set_options", 00:18:04.661 "params": { 00:18:04.661 "impl_name": "ssl", 00:18:04.661 "recv_buf_size": 4096, 00:18:04.661 "send_buf_size": 4096, 00:18:04.661 "enable_recv_pipe": true, 00:18:04.661 "enable_quickack": false, 00:18:04.661 "enable_placement_id": 0, 00:18:04.661 "enable_zerocopy_send_server": true, 00:18:04.661 "enable_zerocopy_send_client": false, 00:18:04.661 "zerocopy_threshold": 0, 00:18:04.661 "tls_version": 0, 00:18:04.661 "enable_ktls": false 00:18:04.661 } 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "method": "sock_impl_set_options", 00:18:04.661 "params": { 00:18:04.661 "impl_name": "posix", 00:18:04.661 "recv_buf_size": 2097152, 00:18:04.661 "send_buf_size": 2097152, 
00:18:04.661 "enable_recv_pipe": true, 00:18:04.661 "enable_quickack": false, 00:18:04.661 "enable_placement_id": 0, 00:18:04.661 "enable_zerocopy_send_server": true, 00:18:04.661 "enable_zerocopy_send_client": false, 00:18:04.661 "zerocopy_threshold": 0, 00:18:04.661 "tls_version": 0, 00:18:04.661 "enable_ktls": false 00:18:04.661 } 00:18:04.661 } 00:18:04.661 ] 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "subsystem": "vmd", 00:18:04.661 "config": [] 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "subsystem": "accel", 00:18:04.661 "config": [ 00:18:04.661 { 00:18:04.661 "method": "accel_set_options", 00:18:04.661 "params": { 00:18:04.661 "small_cache_size": 128, 00:18:04.661 "large_cache_size": 16, 00:18:04.661 "task_count": 2048, 00:18:04.661 "sequence_count": 2048, 00:18:04.661 "buf_count": 2048 00:18:04.661 } 00:18:04.661 } 00:18:04.661 ] 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "subsystem": "bdev", 00:18:04.661 "config": [ 00:18:04.661 { 00:18:04.661 "method": "bdev_set_options", 00:18:04.661 "params": { 00:18:04.661 "bdev_io_pool_size": 65535, 00:18:04.661 "bdev_io_cache_size": 256, 00:18:04.661 "bdev_auto_examine": true, 00:18:04.661 "iobuf_small_cache_size": 128, 00:18:04.661 "iobuf_large_cache_size": 16 00:18:04.661 } 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "method": "bdev_raid_set_options", 00:18:04.661 "params": { 00:18:04.661 "process_window_size_kb": 1024 00:18:04.661 } 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "method": "bdev_iscsi_set_options", 00:18:04.661 "params": { 00:18:04.661 "timeout_sec": 30 00:18:04.661 } 00:18:04.661 }, 00:18:04.661 { 00:18:04.661 "method": "bdev_nvme_set_options", 00:18:04.661 "params": { 00:18:04.661 "action_on_timeout": "none", 00:18:04.661 "timeout_us": 0, 00:18:04.661 "timeout_admin_us": 0, 00:18:04.661 "keep_alive_timeout_ms": 10000, 00:18:04.662 "arbitration_burst": 0, 00:18:04.662 "low_priority_weight": 0, 00:18:04.662 "medium_priority_weight": 0, 00:18:04.662 "high_priority_weight": 0, 00:18:04.662 
"nvme_adminq_poll_period_us": 10000, 00:18:04.662 "nvme_ioq_poll_period_us": 0, 00:18:04.662 "io_queue_requests": 512, 00:18:04.662 "delay_cmd_submit": true, 00:18:04.662 "transport_retry_count": 4, 00:18:04.662 "bdev_retry_count": 3, 00:18:04.662 "transport_ack_timeout": 0, 00:18:04.662 "ctrlr_loss_timeout_sec": 0, 00:18:04.662 "reconnect_delay_sec": 0, 00:18:04.662 "fast_io_fail_timeout_sec": 0, 00:18:04.662 "disable_auto_failback": false, 00:18:04.662 "generate_uuids": false, 00:18:04.662 "transport_tos": 0, 00:18:04.662 "nvme_error_stat": false, 00:18:04.662 "rdma_srq_size": 0, 00:18:04.662 "io_path_stat": false, 00:18:04.662 "allow_accel_sequence": false, 00:18:04.662 "rdma_max_cq_size": 0, 00:18:04.662 "rdma_cm_event_timeout_ms": 0, 00:18:04.662 "dhchap_digests": [ 00:18:04.662 "sha256", 00:18:04.662 "sha384", 00:18:04.662 "sha512" 00:18:04.662 ], 00:18:04.662 "dhchap_dhgroups": [ 00:18:04.662 "null", 00:18:04.662 "ffdhe2048", 00:18:04.662 "ffdhe3072", 00:18:04.662 "ffdhe4096", 00:18:04.662 "ffdhe6144", 00:18:04.662 "ffdhe8192" 00:18:04.662 ] 00:18:04.662 } 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "method": "bdev_nvme_attach_controller", 00:18:04.662 "params": { 00:18:04.662 "name": "nvme0", 00:18:04.662 "trtype": "TCP", 00:18:04.662 "adrfam": "IPv4", 00:18:04.662 "traddr": "10.0.0.2", 00:18:04.662 "trsvcid": "4420", 00:18:04.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.662 "prchk_reftag": false, 00:18:04.662 "prchk_guard": false, 00:18:04.662 "ctrlr_loss_timeout_sec": 0, 00:18:04.662 "reconnect_delay_sec": 0, 00:18:04.662 "fast_io_fail_timeout_sec": 0, 00:18:04.662 "psk": "key0", 00:18:04.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.662 "hdgst": false, 00:18:04.662 "ddgst": false 00:18:04.662 } 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "method": "bdev_nvme_set_hotplug", 00:18:04.662 "params": { 00:18:04.662 "period_us": 100000, 00:18:04.662 "enable": false 00:18:04.662 } 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "method": 
"bdev_enable_histogram", 00:18:04.662 "params": { 00:18:04.662 "name": "nvme0n1", 00:18:04.662 "enable": true 00:18:04.662 } 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "method": "bdev_wait_for_examine" 00:18:04.662 } 00:18:04.662 ] 00:18:04.662 }, 00:18:04.662 { 00:18:04.662 "subsystem": "nbd", 00:18:04.662 "config": [] 00:18:04.662 } 00:18:04.662 ] 00:18:04.662 }' 00:18:04.662 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.662 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.662 11:22:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.662 [2024-07-12 11:22:30.676473] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:18:04.662 [2024-07-12 11:22:30.676555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600492 ] 00:18:04.662 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.662 [2024-07-12 11:22:30.738980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.919 [2024-07-12 11:22:30.852621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.919 [2024-07-12 11:22:31.027104] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.856 11:22:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.856 11:22:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:05.856 11:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:18:05.856 11:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:05.856 11:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.856 11:22:31 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.114 Running I/O for 1 seconds... 00:18:07.046 00:18:07.046 Latency(us) 00:18:07.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.046 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:07.046 Verification LBA range: start 0x0 length 0x2000 00:18:07.046 nvme0n1 : 1.02 3275.20 12.79 0.00 0.00 38591.32 7281.78 64856.37 00:18:07.046 =================================================================================================================== 00:18:07.046 Total : 3275.20 12.79 0.00 0.00 38591.32 7281.78 64856.37 00:18:07.046 0 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:07.046 nvmf_trace.0 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 600492 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 600492 ']' 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 600492 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 600492 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 600492' 00:18:07.046 killing process with pid 600492 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 600492 00:18:07.046 Received shutdown signal, test time was about 1.000000 seconds 00:18:07.046 00:18:07.046 Latency(us) 00:18:07.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.046 =================================================================================================================== 00:18:07.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.046 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 600492 00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
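The `killprocess` trace above (kill -0 liveness check, `ps -o comm=` name check, then kill and reap) can be sketched as a standalone helper. This is a minimal sketch, not SPDK's exact `common/autotest_common.sh` code; the function name and messages are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper: verify the PID is still alive with
# `kill -0`, read its command name so a recycled PID belonging to an
# unrelated process is visible in the log, then signal it and reap it.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")         # same check the trace shows
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; ignore SIGTERM status
}

# demo: start a background sleeper and tear it down
sleep 30 &
killprocess_sketch "$!"
```

The name check mirrors the log's guard against killing `sudo` or a recycled PID before sending the signal.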
00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:07.304 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:07.304 rmmod nvme_tcp 00:18:07.304 rmmod nvme_fabrics 00:18:07.562 rmmod nvme_keyring 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 600341 ']' 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 600341 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 600341 ']' 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 600341 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 600341 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 600341' 00:18:07.562 killing process with pid 600341 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 600341 00:18:07.562 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 600341 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.822 11:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.732 11:22:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.732 11:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.S25tEOOl2q /tmp/tmp.B0ypxocy0a /tmp/tmp.qljnWVlTvm 00:18:09.732 00:18:09.732 real 1m20.207s 00:18:09.732 user 2m11.643s 00:18:09.732 sys 0m24.463s 00:18:09.732 11:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.732 11:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.732 ************************************ 00:18:09.732 END TEST nvmf_tls 00:18:09.732 ************************************ 00:18:09.732 11:22:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:09.732 11:22:35 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:09.732 11:22:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.732 11:22:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.732 11:22:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.732 ************************************ 00:18:09.732 START TEST nvmf_fips 00:18:09.732 ************************************ 00:18:09.732 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:09.992 * Looking for test storage... 00:18:09.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:09.992 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:09.993 11:22:35 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:09.993 Error setting digest 00:18:09.993 0022A71EC27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:09.993 0022A71EC27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:18:09.993 11:22:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.893 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.893 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
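The `cmp_versions` walk traced earlier (splitting `3.0.9` and `3.0.0` on dots and comparing component by component to decide `>=`) reduces to a short helper. A minimal sketch, with an illustrative name rather than the real `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison the trace steps through:
# split both versions on '.', pad the shorter one with zeros, and
# compare numerically left to right.
version_ge() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x > y )) && return 0           # first differing component decides
        (( x < y )) && return 1
    done
    return 0                              # all equal: ">=" holds
}

version_ge 3.0.9 3.0.0 && echo "3.0.9 >= 3.0.0"
```

This is the same logic that lets the FIPS test proceed once the installed OpenSSL (3.0.9 here) clears the 3.0.0 floor.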
00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.894 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:18:12.154 00:18:12.154 --- 10.0.0.2 ping statistics --- 00:18:12.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.154 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:18:12.154 00:18:12.154 --- 10.0.0.1 ping statistics --- 00:18:12.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.154 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=602849 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 602849 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 602849 ']' 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.154 11:22:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:12.154 [2024-07-12 11:22:38.247499] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:18:12.154 [2024-07-12 11:22:38.247585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.154 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.412 [2024-07-12 11:22:38.311980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.412 [2024-07-12 11:22:38.426950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.412 [2024-07-12 11:22:38.427011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.412 [2024-07-12 11:22:38.427026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.412 [2024-07-12 11:22:38.427038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.412 [2024-07-12 11:22:38.427048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:12.412 [2024-07-12 11:22:38.427074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:13.346 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.346 [2024-07-12 11:22:39.441685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.346 [2024-07-12 11:22:39.457687] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:18:13.346 [2024-07-12 11:22:39.457903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.604 [2024-07-12 11:22:39.489251] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:13.604 malloc0 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=603009 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 603009 /var/tmp/bdevperf.sock 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 603009 ']' 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.604 11:22:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 [2024-07-12 11:22:39.581039] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:18:13.604 [2024-07-12 11:22:39.581132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603009 ] 00:18:13.604 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.604 [2024-07-12 11:22:39.638738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.896 [2024-07-12 11:22:39.746807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.461 11:22:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.461 11:22:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:14.461 11:22:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.735 [2024-07-12 11:22:40.810400] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.735 [2024-07-12 11:22:40.810527] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.992 TLSTESTn1 00:18:14.992 11:22:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.992 Running I/O for 10 seconds... 
00:18:24.983 00:18:24.983 Latency(us) 00:18:24.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.983 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.983 Verification LBA range: start 0x0 length 0x2000 00:18:24.983 TLSTESTn1 : 10.02 3493.67 13.65 0.00 0.00 36575.46 7815.77 28544.57 00:18:24.983 =================================================================================================================== 00:18:24.983 Total : 3493.67 13.65 0.00 0.00 36575.46 7815.77 28544.57 00:18:24.983 0 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:24.983 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:24.983 nvmf_trace.0 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 603009 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 603009 ']' 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 
603009 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 603009 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 603009' 00:18:25.241 killing process with pid 603009 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 603009 00:18:25.241 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.241 00:18:25.241 Latency(us) 00:18:25.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.241 =================================================================================================================== 00:18:25.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.241 [2024-07-12 11:22:51.172261] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:25.241 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 603009 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:18:25.499 rmmod nvme_tcp 00:18:25.499 rmmod nvme_fabrics 00:18:25.499 rmmod nvme_keyring 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 602849 ']' 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 602849 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 602849 ']' 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 602849 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 602849 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 602849' 00:18:25.499 killing process with pid 602849 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 602849 00:18:25.499 [2024-07-12 11:22:51.527248] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:25.499 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 602849 00:18:25.759 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.759 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.760 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.760 
11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.760 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.760 11:22:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.760 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.760 11:22:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.289 11:22:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.289 11:22:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:28.289 00:18:28.289 real 0m18.014s 00:18:28.289 user 0m24.137s 00:18:28.289 sys 0m5.405s 00:18:28.289 11:22:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.289 11:22:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:28.289 ************************************ 00:18:28.289 END TEST nvmf_fips 00:18:28.289 ************************************ 00:18:28.289 11:22:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:28.289 11:22:53 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:28.289 11:22:53 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:28.289 11:22:53 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:28.289 11:22:53 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:28.289 11:22:53 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.289 11:22:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:30.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:30.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.188 
11:22:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.188 11:22:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:30.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:30.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:30.189 11:22:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:30.189 11:22:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:30.189 11:22:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.189 11:22:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:30.189 ************************************ 00:18:30.189 START TEST nvmf_perf_adq 00:18:30.189 ************************************ 00:18:30.189 11:22:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:30.189 * Looking for test storage... 00:18:30.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.189 11:22:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:30.189 11:22:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.090 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.090 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:32.091 11:22:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:32.662 11:22:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:34.560 11:23:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
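The `adq_reload_driver` step traced above boils down to three commands: unload the `ice` driver, reload it, and wait for the ports to reappear. A minimal sketch of that sequence is below; the `run` helper and `DRY_RUN` knob are illustrative additions (not part of the SPDK scripts) so the sketch can be exercised without root or an E810 NIC.

```shell
#!/usr/bin/env bash
# Sketch of the adq_reload_driver step from perf_adq.sh as seen in the log:
# the ice driver is removed and reinserted so ADQ module state starts clean.
set -euo pipefail

run() {  # print instead of execute when DRY_RUN=1 (illustrative helper)
    if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "would run: $*"; else "$@"; fi
}

adq_reload_driver() {
    run rmmod ice
    run modprobe ice
    run sleep 5   # give the NIC ports time to come back up
}

DRY_RUN=1 adq_reload_driver
```

On the real test bed the `sleep 5` matters: the netdev names (`cvl_0_0`, `cvl_0_1`) are re-enumerated only after the driver finishes probing.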
00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:39.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:18:39.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:39.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:39.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.848 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:18:39.848 00:18:39.848 --- 10.0.0.2 ping statistics --- 00:18:39.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.849 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
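The `nvmf_tcp_init` commands in the trace above build a single-host test bed: one port of the NIC pair (`cvl_0_0`) is moved into a network namespace so target (10.0.0.2) and initiator (10.0.0.1) traverse real hardware, port 4420 is opened, and connectivity is verified with ping in both directions. The sketch below reconstructs that sequence; `setup_cmds` is an illustrative wrapper that only emits the commands, since executing them needs root and the actual interfaces.

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init from nvmf/common.sh as it appears in the log.
# Interface names, namespace, and addresses mirror the trace.
set -euo pipefail

NETNS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0; INI_IF=cvl_0_1
TGT_IP=10.0.0.2; INI_IP=10.0.0.1

setup_cmds() {   # emit the sequence; pipe to "sudo sh -ex" to run for real
    cat <<EOF
ip netns add $NETNS
ip link set $TGT_IF netns $NETNS
ip addr add $INI_IP/24 dev $INI_IF
ip netns exec $NETNS ip addr add $TGT_IP/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NETNS ip link set $TGT_IF up
ip netns exec $NETNS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 $TGT_IP
ip netns exec $NETNS ping -c 1 $INI_IP
EOF
}

setup_cmds
```

Putting the target port in a namespace is what lets `nvmf_tgt` later run under `ip netns exec cvl_0_0_ns_spdk` while `spdk_nvme_perf` connects from the host side over the wire.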
00:18:39.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:39.849 00:18:39.849 --- 10.0.0.1 ping statistics --- 00:18:39.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.849 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=608884 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 608884 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- 
# '[' -z 608884 ']' 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.849 11:23:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:39.849 [2024-07-12 11:23:05.801941] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:18:39.849 [2024-07-12 11:23:05.802014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.849 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.849 [2024-07-12 11:23:05.863647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.849 [2024-07-12 11:23:05.969123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.849 [2024-07-12 11:23:05.969197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.849 [2024-07-12 11:23:05.969211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.849 [2024-07-12 11:23:05.969222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.849 [2024-07-12 11:23:05.969231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.849 [2024-07-12 11:23:05.969314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.849 [2024-07-12 11:23:05.969957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.849 [2024-07-12 11:23:05.969983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.849 [2024-07-12 11:23:05.969986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 [2024-07-12 11:23:06.168508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 Malloc1 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 
11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.107 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:40.108 [2024-07-12 11:23:06.219438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.108 11:23:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.108 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=608914 00:18:40.108 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:40.108 11:23:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:40.366 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:42.264 
"tick_rate": 2700000000, 00:18:42.264 "poll_groups": [ 00:18:42.264 { 00:18:42.264 "name": "nvmf_tgt_poll_group_000", 00:18:42.264 "admin_qpairs": 1, 00:18:42.264 "io_qpairs": 1, 00:18:42.264 "current_admin_qpairs": 1, 00:18:42.264 "current_io_qpairs": 1, 00:18:42.264 "pending_bdev_io": 0, 00:18:42.264 "completed_nvme_io": 20558, 00:18:42.264 "transports": [ 00:18:42.264 { 00:18:42.264 "trtype": "TCP" 00:18:42.264 } 00:18:42.264 ] 00:18:42.264 }, 00:18:42.264 { 00:18:42.264 "name": "nvmf_tgt_poll_group_001", 00:18:42.264 "admin_qpairs": 0, 00:18:42.264 "io_qpairs": 1, 00:18:42.264 "current_admin_qpairs": 0, 00:18:42.264 "current_io_qpairs": 1, 00:18:42.264 "pending_bdev_io": 0, 00:18:42.264 "completed_nvme_io": 20241, 00:18:42.264 "transports": [ 00:18:42.264 { 00:18:42.264 "trtype": "TCP" 00:18:42.264 } 00:18:42.264 ] 00:18:42.264 }, 00:18:42.264 { 00:18:42.264 "name": "nvmf_tgt_poll_group_002", 00:18:42.264 "admin_qpairs": 0, 00:18:42.264 "io_qpairs": 1, 00:18:42.264 "current_admin_qpairs": 0, 00:18:42.264 "current_io_qpairs": 1, 00:18:42.264 "pending_bdev_io": 0, 00:18:42.264 "completed_nvme_io": 20564, 00:18:42.264 "transports": [ 00:18:42.264 { 00:18:42.264 "trtype": "TCP" 00:18:42.264 } 00:18:42.264 ] 00:18:42.264 }, 00:18:42.264 { 00:18:42.264 "name": "nvmf_tgt_poll_group_003", 00:18:42.264 "admin_qpairs": 0, 00:18:42.264 "io_qpairs": 1, 00:18:42.264 "current_admin_qpairs": 0, 00:18:42.264 "current_io_qpairs": 1, 00:18:42.264 "pending_bdev_io": 0, 00:18:42.264 "completed_nvme_io": 20554, 00:18:42.264 "transports": [ 00:18:42.264 { 00:18:42.264 "trtype": "TCP" 00:18:42.264 } 00:18:42.264 ] 00:18:42.264 } 00:18:42.264 ] 00:18:42.264 }' 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:42.264 11:23:08 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:42.264 11:23:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 608914 00:18:50.424 Initializing NVMe Controllers 00:18:50.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:50.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:50.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:50.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:50.424 Initialization complete. Launching workers. 00:18:50.424 ======================================================== 00:18:50.424 Latency(us) 00:18:50.424 Device Information : IOPS MiB/s Average min max 00:18:50.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10717.80 41.87 5973.29 2386.98 39872.64 00:18:50.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10634.50 41.54 6018.00 2210.64 10392.83 00:18:50.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10704.40 41.81 5980.58 1498.76 11232.78 00:18:50.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10726.10 41.90 5968.29 2568.13 9734.38 00:18:50.424 ======================================================== 00:18:50.424 Total : 42782.80 167.12 5984.97 1498.76 39872.64 00:18:50.424 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:50.424 11:23:16 
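The pass/fail gate traced above (`perf_adq.sh@78-79`) queries `nvmf_get_stats` while `spdk_nvme_perf` is running and asserts that all 4 poll groups (one per core in `-m 0xF`) each carry exactly one I/O qpair, i.e. ADQ spread the connections evenly. A rough stand-alone sketch of that check is below; the log uses a `jq` filter on live RPC output, which is approximated here with `grep` over a canned `$stats` JSON so the sketch has no external dependencies.

```shell
#!/usr/bin/env bash
# Sketch of the poll-group balance check from perf_adq.sh. In the real test,
# $stats comes from "rpc.py nvmf_get_stats" and the count from a jq filter:
#   jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l
set -euo pipefail

stats='{"poll_groups":[
 {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
 {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'

count=$(grep -c '"current_io_qpairs":1' <<<"$stats")
if (( count != 4 )); then
    echo "expected 4 busy poll groups, got $count" >&2
    exit 1
fi
echo "all $count poll groups have one I/O qpair"
```

The `grep` shortcut works only because the canned JSON is one object per line; against real RPC output the `jq` form from the log is the robust choice.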
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.424 rmmod nvme_tcp 00:18:50.424 rmmod nvme_fabrics 00:18:50.424 rmmod nvme_keyring 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 608884 ']' 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 608884 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 608884 ']' 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 608884 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 608884 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 608884' 00:18:50.424 killing process with pid 608884 00:18:50.424 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 608884 00:18:50.425 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 608884 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.683 11:23:16 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.683 11:23:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.221 11:23:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.221 11:23:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:18:53.221 11:23:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:53.480 11:23:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:55.377 11:23:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.646 11:23:26 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:00.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:00.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.646 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:00.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:00.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:00.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:00.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:19:00.647 00:19:00.647 --- 10.0.0.2 ping statistics --- 00:19:00.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.647 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:19:00.647 00:19:00.647 --- 10.0.0.1 ping statistics --- 00:19:00.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.647 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
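As an aside on the verification logic this log exercises: the `jq -r '.poll_groups[] | select(.current_io_qpairs == N) | length' | wc -l` pipelines applied to the `nvmf_get_stats` output count how many poll groups are running the expected number of IO qpairs, and `perf_adq.sh` fails the test when that count mismatches. A minimal Python sketch of the same counting logic, using a stats dict shaped like the JSON shown above (the helper name is illustrative, not part of the test suite):

```python
def count_groups_with_io_qpairs(stats, expected):
    """Return how many poll groups currently run `expected` IO qpairs.

    Mirrors the jq filter
        .poll_groups[] | select(.current_io_qpairs == expected)
    followed by `wc -l` in perf_adq.sh.
    """
    return sum(
        1
        for group in stats.get("poll_groups", [])
        if group.get("current_io_qpairs") == expected
    )


# Trimmed-down stats dict shaped like the nvmf_get_stats output in this log.
stats = {
    "tick_rate": 2700000000,
    "poll_groups": [
        {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1},
        {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1},
        {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1},
        {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1},
    ],
}

# perf_adq.sh asserts the count equals the total group count
# (the `[[ 4 -ne 4 ]]` check seen earlier in this log).
count = count_groups_with_io_qpairs(stats, expected=1)
assert count == len(stats["poll_groups"])
```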
00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:00.647 net.core.busy_poll = 1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:00.647 net.core.busy_read = 1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=611537 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 611537 00:19:00.647 11:23:26 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 611537 ']' 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.647 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.647 [2024-07-12 11:23:26.751965] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:00.647 [2024-07-12 11:23:26.752062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.927 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.927 [2024-07-12 11:23:26.819073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.927 [2024-07-12 11:23:26.926168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.927 [2024-07-12 11:23:26.926237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.927 [2024-07-12 11:23:26.926250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.927 [2024-07-12 11:23:26.926276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:00.927 [2024-07-12 11:23:26.926285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.927 [2024-07-12 11:23:26.926362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.927 [2024-07-12 11:23:26.926428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.927 [2024-07-12 11:23:26.926497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.927 [2024-07-12 11:23:26.926500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.927 11:23:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.927 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.186 [2024-07-12 11:23:27.123440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.186 Malloc1 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:01.186 [2024-07-12 11:23:27.174463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=611683 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:01.186 11:23:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:01.186 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.086 11:23:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:03.086 11:23:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.086 11:23:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:03.086 11:23:29 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.086 11:23:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:03.086 "tick_rate": 2700000000, 00:19:03.086 "poll_groups": [ 00:19:03.086 { 00:19:03.086 "name": "nvmf_tgt_poll_group_000", 00:19:03.086 "admin_qpairs": 1, 00:19:03.086 "io_qpairs": 2, 00:19:03.086 "current_admin_qpairs": 1, 00:19:03.086 "current_io_qpairs": 2, 00:19:03.086 "pending_bdev_io": 0, 00:19:03.086 "completed_nvme_io": 26025, 00:19:03.086 "transports": [ 00:19:03.086 { 00:19:03.086 "trtype": "TCP" 00:19:03.086 } 00:19:03.086 ] 00:19:03.086 }, 00:19:03.086 { 00:19:03.086 "name": "nvmf_tgt_poll_group_001", 00:19:03.087 "admin_qpairs": 0, 00:19:03.087 "io_qpairs": 2, 00:19:03.087 "current_admin_qpairs": 0, 00:19:03.087 "current_io_qpairs": 2, 00:19:03.087 "pending_bdev_io": 0, 00:19:03.087 "completed_nvme_io": 26762, 00:19:03.087 "transports": [ 00:19:03.087 { 00:19:03.087 "trtype": "TCP" 00:19:03.087 } 00:19:03.087 ] 00:19:03.087 }, 00:19:03.087 { 00:19:03.087 "name": "nvmf_tgt_poll_group_002", 00:19:03.087 "admin_qpairs": 0, 00:19:03.087 "io_qpairs": 0, 00:19:03.087 "current_admin_qpairs": 0, 00:19:03.087 "current_io_qpairs": 0, 00:19:03.087 "pending_bdev_io": 0, 00:19:03.087 "completed_nvme_io": 0, 00:19:03.087 "transports": [ 00:19:03.087 { 00:19:03.087 "trtype": "TCP" 00:19:03.087 } 00:19:03.087 ] 00:19:03.087 }, 00:19:03.087 { 00:19:03.087 "name": "nvmf_tgt_poll_group_003", 00:19:03.087 "admin_qpairs": 0, 00:19:03.087 "io_qpairs": 0, 00:19:03.087 "current_admin_qpairs": 0, 00:19:03.087 "current_io_qpairs": 0, 00:19:03.087 "pending_bdev_io": 0, 00:19:03.087 "completed_nvme_io": 0, 00:19:03.087 "transports": [ 00:19:03.087 { 00:19:03.087 "trtype": "TCP" 00:19:03.087 } 00:19:03.087 ] 00:19:03.087 } 00:19:03.087 ] 00:19:03.087 }' 00:19:03.087 11:23:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:03.087 11:23:29 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:19:03.345 11:23:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:03.345 11:23:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:03.345 11:23:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 611683 00:19:11.455 Initializing NVMe Controllers 00:19:11.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:11.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:11.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:11.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:11.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:11.455 Initialization complete. Launching workers. 00:19:11.455 ======================================================== 00:19:11.455 Latency(us) 00:19:11.455 Device Information : IOPS MiB/s Average min max 00:19:11.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7919.48 30.94 8107.47 1567.71 55597.07 00:19:11.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5849.84 22.85 10983.24 1734.26 54266.32 00:19:11.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7034.10 27.48 9111.35 1756.73 53627.16 00:19:11.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6818.51 26.63 9385.96 1713.18 54285.64 00:19:11.455 ======================================================== 00:19:11.455 Total : 27621.93 107.90 9287.75 1567.71 55597.07 00:19:11.455 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.455 rmmod nvme_tcp 00:19:11.455 rmmod nvme_fabrics 00:19:11.455 rmmod nvme_keyring 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:11.455 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 611537 ']' 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 611537 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 611537 ']' 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 611537 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 611537 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 611537' 00:19:11.456 killing process with pid 611537 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 611537 00:19:11.456 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 611537 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.714 11:23:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.617 11:23:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:13.877 11:23:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:13.877 00:19:13.877 real 0m43.799s 00:19:13.877 user 2m38.503s 00:19:13.877 sys 0m9.833s 00:19:13.877 11:23:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.877 11:23:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:13.877 ************************************ 00:19:13.877 END TEST nvmf_perf_adq 00:19:13.877 ************************************ 00:19:13.877 11:23:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:13.877 11:23:39 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:13.877 11:23:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:13.877 11:23:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.877 11:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:13.877 ************************************ 00:19:13.877 START TEST nvmf_shutdown 00:19:13.877 ************************************ 00:19:13.877 
11:23:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:13.877 * Looking for test storage... 00:19:13.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.877 
11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.877 11:23:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.878 11:23:39 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:13.878 ************************************ 00:19:13.878 START TEST nvmf_shutdown_tc1 00:19:13.878 ************************************ 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.878 11:23:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:13.878 11:23:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:16.411 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:16.411 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:16.411 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.411 11:23:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:16.411 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.411 
11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.411 11:23:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:16.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:19:16.411 00:19:16.411 --- 10.0.0.2 ping statistics --- 00:19:16.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.411 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:19:16.411 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:19:16.411 00:19:16.411 --- 10.0.0.1 ping statistics --- 00:19:16.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.411 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=614841 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 614841 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 614841 ']' 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.412 [2024-07-12 11:23:42.184414] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:19:16.412 [2024-07-12 11:23:42.184504] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.412 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.412 [2024-07-12 11:23:42.252690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.412 [2024-07-12 11:23:42.369054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.412 [2024-07-12 11:23:42.369107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.412 [2024-07-12 11:23:42.369136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.412 [2024-07-12 11:23:42.369148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.412 [2024-07-12 11:23:42.369159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.412 [2024-07-12 11:23:42.372887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.412 [2024-07-12 11:23:42.372963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.412 [2024-07-12 11:23:42.376901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:16.412 [2024-07-12 11:23:42.376915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.412 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.412 [2024-07-12 11:23:42.536813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.670 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.670 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:16.671 
11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.671 11:23:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:16.671 Malloc1 00:19:16.671 [2024-07-12 11:23:42.622477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.671 Malloc2 00:19:16.671 Malloc3 00:19:16.671 Malloc4 00:19:16.671 Malloc5 00:19:16.929 Malloc6 00:19:16.929 Malloc7 00:19:16.929 Malloc8 00:19:16.929 Malloc9 00:19:16.929 Malloc10 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=615024 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 615024 
/var/tmp/bdevperf.sock 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 615024 ']' 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 
)") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 "hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.188 { 00:19:17.188 "params": { 00:19:17.188 "name": "Nvme$subsystem", 00:19:17.188 "trtype": "$TEST_TRANSPORT", 00:19:17.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.188 "adrfam": "ipv4", 00:19:17.188 "trsvcid": "$NVMF_PORT", 00:19:17.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.188 
"hdgst": ${hdgst:-false}, 00:19:17.188 "ddgst": ${ddgst:-false} 00:19:17.188 }, 00:19:17.188 "method": "bdev_nvme_attach_controller" 00:19:17.188 } 00:19:17.188 EOF 00:19:17.188 )") 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.188 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.189 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:17.189 { 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme$subsystem", 00:19:17.189 "trtype": "$TEST_TRANSPORT", 00:19:17.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "$NVMF_PORT", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.189 "hdgst": ${hdgst:-false}, 00:19:17.189 "ddgst": ${ddgst:-false} 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 } 00:19:17.189 EOF 00:19:17.189 )") 00:19:17.189 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:17.189 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:17.189 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:17.189 11:23:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme1", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme2", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme3", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme4", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme5", 00:19:17.189 
"trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme6", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme7", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme8", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme9", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": 
false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 },{ 00:19:17.189 "params": { 00:19:17.189 "name": "Nvme10", 00:19:17.189 "trtype": "tcp", 00:19:17.189 "traddr": "10.0.0.2", 00:19:17.189 "adrfam": "ipv4", 00:19:17.189 "trsvcid": "4420", 00:19:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:17.189 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:17.189 "hdgst": false, 00:19:17.189 "ddgst": false 00:19:17.189 }, 00:19:17.189 "method": "bdev_nvme_attach_controller" 00:19:17.189 }' 00:19:17.189 [2024-07-12 11:23:43.135026] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:17.189 [2024-07-12 11:23:43.135104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:17.189 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.189 [2024-07-12 11:23:43.198773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.189 [2024-07-12 11:23:43.309118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # 
kill -9 615024 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:19.120 11:23:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:20.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 615024 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:20.053 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 614841 00:19:20.053 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:20.053 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 
"trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 
00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.054 { 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme$subsystem", 00:19:20.054 "trtype": "$TEST_TRANSPORT", 00:19:20.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "$NVMF_PORT", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.054 "hdgst": ${hdgst:-false}, 00:19:20.054 "ddgst": ${ddgst:-false} 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 } 00:19:20.054 EOF 00:19:20.054 )") 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:20.054 11:23:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:20.054 11:23:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme1", 00:19:20.054 "trtype": "tcp", 00:19:20.054 "traddr": "10.0.0.2", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "4420", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.054 "hdgst": false, 00:19:20.054 "ddgst": false 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 },{ 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme2", 00:19:20.054 "trtype": "tcp", 00:19:20.054 "traddr": "10.0.0.2", 00:19:20.054 "adrfam": "ipv4", 00:19:20.054 "trsvcid": "4420", 00:19:20.054 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:20.054 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:20.054 "hdgst": false, 00:19:20.054 "ddgst": false 00:19:20.054 }, 00:19:20.054 "method": "bdev_nvme_attach_controller" 00:19:20.054 },{ 00:19:20.054 "params": { 00:19:20.054 "name": "Nvme3", 00:19:20.054 "trtype": "tcp", 00:19:20.054 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 },{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme4", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 
},{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme5", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 },{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme6", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 },{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme7", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 },{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme8", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 },{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme9", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:20.055 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 },{ 00:19:20.055 "params": { 00:19:20.055 "name": "Nvme10", 00:19:20.055 "trtype": "tcp", 00:19:20.055 "traddr": "10.0.0.2", 00:19:20.055 "adrfam": "ipv4", 00:19:20.055 "trsvcid": "4420", 00:19:20.055 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:20.055 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:20.055 "hdgst": false, 00:19:20.055 "ddgst": false 00:19:20.055 }, 00:19:20.055 "method": "bdev_nvme_attach_controller" 00:19:20.055 }' 00:19:20.055 [2024-07-12 11:23:46.158814] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:20.055 [2024-07-12 11:23:46.158912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615341 ] 00:19:20.314 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.314 [2024-07-12 11:23:46.226417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.314 [2024-07-12 11:23:46.336628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.685 Running I/O for 1 seconds... 
00:19:23.076
00:19:23.076 Latency(us)
00:19:23.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:23.076 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme1n1 : 1.07 239.09 14.94 0.00 0.00 264867.65 18641.35 239230.67
00:19:23.076 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme2n1 : 1.06 241.73 15.11 0.00 0.00 257291.76 23301.69 246997.90
00:19:23.076 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme3n1 : 1.06 240.51 15.03 0.00 0.00 253442.28 22136.60 248551.35
00:19:23.076 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme4n1 : 1.10 235.66 14.73 0.00 0.00 250214.21 20680.25 256318.58
00:19:23.076 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme5n1 : 1.14 224.50 14.03 0.00 0.00 264004.65 38059.43 264085.81
00:19:23.076 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme6n1 : 1.11 230.92 14.43 0.00 0.00 251723.66 21165.70 251658.24
00:19:23.076 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme7n1 : 1.14 223.82 13.99 0.00 0.00 255867.45 21456.97 253211.69
00:19:23.076 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme8n1 : 1.18 271.94 17.00 0.00 0.00 207839.84 16311.18 253211.69
00:19:23.076 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme9n1 : 1.18 270.85 16.93 0.00 0.00 205193.06 14854.83 251658.24
00:19:23.076 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:23.076 Verification LBA range: start 0x0 length 0x400
00:19:23.076 Nvme10n1 : 1.23 260.82 16.30 0.00 0.00 203075.28 7961.41 268746.15
00:19:23.076 ===================================================================================================================
00:19:23.076 Total : 2439.83 152.49 0.00 0.00 238858.08 7961.41 268746.15
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:23.334 rmmod nvme_tcp
00:19:23.334 rmmod nvme_fabrics
00:19:23.334 rmmod nvme_keyring
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 614841 ']'
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 614841
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 614841 ']'
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 614841
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 614841
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 614841'
00:19:23.334 killing process with pid 614841
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 614841
00:19:23.334 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 614841
00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 --
# [[ tcp == \t\c\p ]] 00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.901 11:23:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.805 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.805 00:19:25.805 real 0m12.024s 00:19:25.805 user 0m34.941s 00:19:25.805 sys 0m3.159s 00:19:25.806 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.806 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:25.806 ************************************ 00:19:25.806 END TEST nvmf_shutdown_tc1 00:19:25.806 ************************************ 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:26.063 ************************************ 00:19:26.063 START TEST nvmf_shutdown_tc2 00:19:26.063 ************************************ 00:19:26.063 11:23:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.063 11:23:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.063 11:23:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:26.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.063 11:23:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:26.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:26.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:26.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.063 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.064 11:23:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.064 11:23:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:26.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:26.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:19:26.064
00:19:26.064 --- 10.0.0.2 ping statistics ---
00:19:26.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:26.064 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:26.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:26.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms
00:19:26.064
00:19:26.064 --- 10.0.0.1 ping statistics ---
00:19:26.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:26.064 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=616209
00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 616209 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 616209 ']' 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.064 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.321 [2024-07-12 11:23:52.222277] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:26.321 [2024-07-12 11:23:52.222362] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.321 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.321 [2024-07-12 11:23:52.285338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.321 [2024-07-12 11:23:52.386462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.321 [2024-07-12 11:23:52.386514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
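The target is launched inside the cvl_0_0_ns_spdk namespace and the harness then blocks in `waitforlisten 616209` until the RPC socket `/var/tmp/spdk.sock` is ready ("Waiting for process to start up and listen on UNIX domain socket..."). A minimal sketch of that polling pattern — the function name `wait_for_socket` and the retry/sleep constants are illustrative; SPDK's real `waitforlisten` helper also probes the RPC server, not just the socket path:

```shell
#!/usr/bin/env bash
# Illustrative sketch (not SPDK's actual helper): poll until a UNIX domain
# socket path appears, giving up after a retry budget is exhausted.
wait_for_socket() {
    local sock_path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S is true only for sockets, so a stale regular file won't match
        [ -S "$sock_path" ] && return 0
        sleep 0.1
    done
    return 1 # timed out; the caller should treat the target as failed
}
```

A caller would use it as `wait_for_socket /var/tmp/spdk.sock || exit 1` before issuing any RPCs.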
00:19:26.321 [2024-07-12 11:23:52.386542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.321 [2024-07-12 11:23:52.386553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.321 [2024-07-12 11:23:52.386562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.321 [2024-07-12 11:23:52.386652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.321 [2024-07-12 11:23:52.386764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.321 [2024-07-12 11:23:52.386930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:26.321 [2024-07-12 11:23:52.386934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.578 [2024-07-12 11:23:52.542835] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.578 11:23:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:26.578 Malloc1 00:19:26.578 [2024-07-12 11:23:52.623334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.578 Malloc2 00:19:26.578 Malloc3 00:19:26.835 Malloc4 00:19:26.835 Malloc5 00:19:26.835 Malloc6 00:19:26.835 Malloc7 00:19:26.835 Malloc8 00:19:27.091 Malloc9 00:19:27.091 Malloc10 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=616386 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 616386 /var/tmp/bdevperf.sock 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 616386 ']' 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
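bdevperf is driven with `-q 64 -o 65536 -w verify -t 10`, matching the `depth: 64, IO size: 65536` headers in the tc1 summary earlier. With 64 KiB IOs, the MiB/s column should equal IOPS/16, and the Total row should be the sum of the ten per-device rows (to within rounding, since the per-row figures are themselves rounded). A quick cross-check of the figures reported above; the awk one-liner is illustrative, not part of the test suite:

```shell
#!/usr/bin/env bash
# Cross-check the tc1 bdevperf summary: MiB/s = IOPS * 65536 / 2^20,
# and Total IOPS is the sum of the ten per-device rows.
iops="239.09 241.73 240.51 235.66 224.50 230.92 223.82 271.94 270.85 260.82"
echo "$iops" | awk '{
    total = 0
    for (i = 1; i <= NF; i++) {
        printf "Nvme%dn1: %7.2f IOPS -> %5.2f MiB/s\n", i, $i, $i * 65536 / 1048576
        total += $i
    }
    # The log reports Total : 2439.83 152.49; the sum of the rounded
    # per-row values comes out to 2439.84, i.e. the same within rounding.
    printf "Total: %.2f IOPS\n", total
}'
```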
00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.091 { 00:19:27.091 "params": { 00:19:27.091 "name": "Nvme$subsystem", 00:19:27.091 "trtype": "$TEST_TRANSPORT", 00:19:27.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.091 "adrfam": "ipv4", 00:19:27.091 "trsvcid": "$NVMF_PORT", 00:19:27.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.091 "hdgst": ${hdgst:-false}, 00:19:27.091 "ddgst": ${ddgst:-false} 00:19:27.091 }, 00:19:27.091 "method": "bdev_nvme_attach_controller" 00:19:27.091 } 00:19:27.091 EOF 00:19:27.091 )") 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.091 { 00:19:27.091 "params": { 00:19:27.091 "name": "Nvme$subsystem", 00:19:27.091 "trtype": "$TEST_TRANSPORT", 00:19:27.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.091 "adrfam": "ipv4", 00:19:27.091 "trsvcid": "$NVMF_PORT", 00:19:27.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.091 "hdgst": ${hdgst:-false}, 00:19:27.091 "ddgst": ${ddgst:-false} 00:19:27.091 
}, 00:19:27.091 "method": "bdev_nvme_attach_controller" 00:19:27.091 } 00:19:27.091 EOF 00:19:27.091 )") 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.091 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 
"params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.092 { 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme$subsystem", 00:19:27.092 "trtype": "$TEST_TRANSPORT", 00:19:27.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "$NVMF_PORT", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.092 "hdgst": ${hdgst:-false}, 00:19:27.092 "ddgst": ${ddgst:-false} 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 } 00:19:27.092 EOF 00:19:27.092 )") 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:27.092 11:23:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme1", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme2", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme3", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme4", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme5", 00:19:27.092 
"trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme6", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme7", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme8", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme9", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": 
false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 },{ 00:19:27.092 "params": { 00:19:27.092 "name": "Nvme10", 00:19:27.092 "trtype": "tcp", 00:19:27.092 "traddr": "10.0.0.2", 00:19:27.092 "adrfam": "ipv4", 00:19:27.092 "trsvcid": "4420", 00:19:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:27.092 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:27.092 "hdgst": false, 00:19:27.092 "ddgst": false 00:19:27.092 }, 00:19:27.092 "method": "bdev_nvme_attach_controller" 00:19:27.092 }' 00:19:27.092 [2024-07-12 11:23:53.121136] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:27.092 [2024-07-12 11:23:53.121229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616386 ] 00:19:27.092 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.092 [2024-07-12 11:23:53.183756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.348 [2024-07-12 11:23:53.293338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.718 Running I/O for 10 seconds... 
00:19:28.976 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.976 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:28.976 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:28.976 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.976 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:29.234 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 616386 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 
-- # '[' -z 616386 ']' 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 616386 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.491 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616386 00:19:29.492 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:29.492 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:29.492 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616386' 00:19:29.492 killing process with pid 616386 00:19:29.492 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 616386 00:19:29.492 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 616386 00:19:29.492 Received shutdown signal, test time was about 0.784934 seconds 00:19:29.492 00:19:29.492 Latency(us) 00:19:29.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.492 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme1n1 : 0.77 250.44 15.65 0.00 0.00 251373.80 18058.81 250104.79 00:19:29.492 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme2n1 : 0.76 251.61 15.73 0.00 0.00 244240.69 20097.71 251658.24 00:19:29.492 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme3n1 : 0.76 254.28 
15.89 0.00 0.00 234838.85 30486.38 240784.12 00:19:29.492 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme4n1 : 0.76 253.10 15.82 0.00 0.00 230430.85 30874.74 239230.67 00:19:29.492 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme5n1 : 0.74 174.01 10.88 0.00 0.00 325949.82 21165.70 267192.70 00:19:29.492 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme6n1 : 0.78 247.40 15.46 0.00 0.00 223702.03 20874.43 248551.35 00:19:29.492 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme7n1 : 0.78 244.88 15.30 0.00 0.00 221209.03 19515.16 254765.13 00:19:29.492 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme8n1 : 0.78 247.71 15.48 0.00 0.00 212244.86 18252.99 211268.65 00:19:29.492 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme9n1 : 0.78 246.46 15.40 0.00 0.00 207583.76 41360.50 234570.33 00:19:29.492 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:29.492 Verification LBA range: start 0x0 length 0x400 00:19:29.492 Nvme10n1 : 0.75 171.33 10.71 0.00 0.00 286697.62 21262.79 288940.94 00:19:29.492 =================================================================================================================== 00:19:29.492 Total : 2341.23 146.33 0.00 0.00 239363.09 18058.81 288940.94 00:19:29.749 11:23:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:31.119 11:23:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 616209 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:31.119 rmmod nvme_tcp 00:19:31.119 rmmod nvme_fabrics 00:19:31.119 rmmod nvme_keyring 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 616209 ']' 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 616209 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 616209 ']' 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 616209 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616209 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616209' 00:19:31.119 killing process with pid 616209 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 616209 00:19:31.119 11:23:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 616209 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.379 11:23:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.910 00:19:33.910 real 0m7.503s 00:19:33.910 user 0m22.296s 00:19:33.910 sys 0m1.398s 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:33.910 ************************************ 00:19:33.910 END TEST nvmf_shutdown_tc2 00:19:33.910 ************************************ 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:33.910 ************************************ 00:19:33.910 START TEST nvmf_shutdown_tc3 00:19:33.910 ************************************ 00:19:33.910 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.911 11:23:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:33.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:33.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:33.911 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:33.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:33.911 11:23:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:33.911 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:33.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:19:33.911 00:19:33.911 --- 10.0.0.2 ping statistics --- 00:19:33.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.911 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:33.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:19:33.912 00:19:33.912 --- 10.0.0.1 ping statistics --- 00:19:33.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.912 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.912 11:23:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=617173 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 617173 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 617173 ']' 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.912 11:23:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:33.912 [2024-07-12 11:23:59.767195] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:33.912 [2024-07-12 11:23:59.767300] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.912 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.912 [2024-07-12 11:23:59.837030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:33.912 [2024-07-12 11:23:59.952219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.912 [2024-07-12 11:23:59.952275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.912 [2024-07-12 11:23:59.952304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.912 [2024-07-12 11:23:59.952316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.912 [2024-07-12 11:23:59.952327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.912 [2024-07-12 11:23:59.952412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.912 [2024-07-12 11:23:59.952465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.912 [2024-07-12 11:23:59.952514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:33.912 [2024-07-12 11:23:59.952516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.170 [2024-07-12 11:24:00.107611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:34.170 
11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:34.170 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.171 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.171 Malloc1 00:19:34.171 [2024-07-12 11:24:00.183668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.171 Malloc2 00:19:34.171 Malloc3 00:19:34.428 Malloc4 00:19:34.428 Malloc5 00:19:34.428 Malloc6 00:19:34.428 Malloc7 00:19:34.428 Malloc8 00:19:34.428 Malloc9 00:19:34.686 Malloc10 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=617397 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 617397 
/var/tmp/bdevperf.sock 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 617397 ']' 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.686 { 00:19:34.686 "params": { 00:19:34.686 "name": "Nvme$subsystem", 00:19:34.686 "trtype": "$TEST_TRANSPORT", 00:19:34.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.686 "adrfam": "ipv4", 00:19:34.686 "trsvcid": "$NVMF_PORT", 00:19:34.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.686 "hdgst": ${hdgst:-false}, 00:19:34.686 "ddgst": ${ddgst:-false} 00:19:34.686 }, 00:19:34.686 "method": "bdev_nvme_attach_controller" 00:19:34.686 } 00:19:34.686 EOF 00:19:34.686 )") 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.686 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.686 { 00:19:34.686 "params": { 00:19:34.686 "name": "Nvme$subsystem", 00:19:34.686 "trtype": "$TEST_TRANSPORT", 00:19:34.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.686 "adrfam": "ipv4", 00:19:34.686 "trsvcid": "$NVMF_PORT", 00:19:34.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 
)") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 
"hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.687 { 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme$subsystem", 00:19:34.687 "trtype": "$TEST_TRANSPORT", 00:19:34.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "$NVMF_PORT", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.687 "hdgst": ${hdgst:-false}, 00:19:34.687 "ddgst": ${ddgst:-false} 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 } 00:19:34.687 EOF 00:19:34.687 )") 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:34.687 11:24:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme1", 00:19:34.687 "trtype": "tcp", 00:19:34.687 "traddr": "10.0.0.2", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "4420", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.687 "hdgst": false, 00:19:34.687 "ddgst": false 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 },{ 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme2", 00:19:34.687 "trtype": "tcp", 00:19:34.687 "traddr": "10.0.0.2", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "4420", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:34.687 "hdgst": false, 00:19:34.687 "ddgst": false 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 },{ 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme3", 00:19:34.687 "trtype": "tcp", 00:19:34.687 "traddr": "10.0.0.2", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "4420", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:34.687 "hdgst": false, 00:19:34.687 "ddgst": false 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 },{ 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme4", 00:19:34.687 "trtype": "tcp", 00:19:34.687 "traddr": "10.0.0.2", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "4420", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:34.687 "hdgst": false, 00:19:34.687 "ddgst": false 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 },{ 00:19:34.687 "params": { 00:19:34.687 "name": "Nvme5", 00:19:34.687 
"trtype": "tcp", 00:19:34.687 "traddr": "10.0.0.2", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "4420", 00:19:34.687 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:34.687 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:34.687 "hdgst": false, 00:19:34.687 "ddgst": false 00:19:34.687 }, 00:19:34.687 "method": "bdev_nvme_attach_controller" 00:19:34.687 },{ 00:19:34.687 "params": { 00:19:34.688 "name": "Nvme6", 00:19:34.688 "trtype": "tcp", 00:19:34.688 "traddr": "10.0.0.2", 00:19:34.688 "adrfam": "ipv4", 00:19:34.688 "trsvcid": "4420", 00:19:34.688 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:34.688 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:34.688 "hdgst": false, 00:19:34.688 "ddgst": false 00:19:34.688 }, 00:19:34.688 "method": "bdev_nvme_attach_controller" 00:19:34.688 },{ 00:19:34.688 "params": { 00:19:34.688 "name": "Nvme7", 00:19:34.688 "trtype": "tcp", 00:19:34.688 "traddr": "10.0.0.2", 00:19:34.688 "adrfam": "ipv4", 00:19:34.688 "trsvcid": "4420", 00:19:34.688 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:34.688 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:34.688 "hdgst": false, 00:19:34.688 "ddgst": false 00:19:34.688 }, 00:19:34.688 "method": "bdev_nvme_attach_controller" 00:19:34.688 },{ 00:19:34.688 "params": { 00:19:34.688 "name": "Nvme8", 00:19:34.688 "trtype": "tcp", 00:19:34.688 "traddr": "10.0.0.2", 00:19:34.688 "adrfam": "ipv4", 00:19:34.688 "trsvcid": "4420", 00:19:34.688 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:34.688 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:34.688 "hdgst": false, 00:19:34.688 "ddgst": false 00:19:34.688 }, 00:19:34.688 "method": "bdev_nvme_attach_controller" 00:19:34.688 },{ 00:19:34.688 "params": { 00:19:34.688 "name": "Nvme9", 00:19:34.688 "trtype": "tcp", 00:19:34.688 "traddr": "10.0.0.2", 00:19:34.688 "adrfam": "ipv4", 00:19:34.688 "trsvcid": "4420", 00:19:34.688 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:34.688 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:34.688 "hdgst": false, 00:19:34.688 "ddgst": 
false 00:19:34.688 }, 00:19:34.688 "method": "bdev_nvme_attach_controller" 00:19:34.688 },{ 00:19:34.688 "params": { 00:19:34.688 "name": "Nvme10", 00:19:34.688 "trtype": "tcp", 00:19:34.688 "traddr": "10.0.0.2", 00:19:34.688 "adrfam": "ipv4", 00:19:34.688 "trsvcid": "4420", 00:19:34.688 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:34.688 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:34.688 "hdgst": false, 00:19:34.688 "ddgst": false 00:19:34.688 }, 00:19:34.688 "method": "bdev_nvme_attach_controller" 00:19:34.688 }' 00:19:34.688 [2024-07-12 11:24:00.683058] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:34.688 [2024-07-12 11:24:00.683136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617397 ] 00:19:34.688 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.688 [2024-07-12 11:24:00.747073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.946 [2024-07-12 11:24:00.858917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.317 Running I/O for 10 seconds... 
00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:36.575 
11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:36.575 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=23 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 23 -ge 100 ']' 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.832 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:37.111 11:24:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 
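The xtrace above shows `shutdown.sh`'s `waitforio` loop polling `bdev_get_iostat` until Nvme1n1 has completed at least 100 reads. A minimal standalone sketch of that polling pattern follows; the real helper lives in target/shutdown.sh and queries the bdevperf RPC socket via `rpc_cmd`/`jq`, so the counter command, the iteration count of 10, and the 0.25 s sleep here are taken from the trace, not from the script source.

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling pattern seen in the trace above.
# $1 is any command that prints the current num_read_ops for a bdev
# (in the real test: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat
#  -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'); $2 is the threshold.
waitforio() {
    local get_count_cmd=$1
    local threshold=${2:-100}
    local ret=1 i count
    # Poll up to 10 times, 0.25 s apart, as the xtrace shows.
    for (( i = 10; i != 0; i-- )); do
        count=$($get_count_cmd)
        if [ "$count" -ge "$threshold" ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

In the trace the first poll reads 23 ops (below the threshold, so the loop sleeps and retries) and the second reads 131, at which point `ret=0` and the loop breaks, matching the `break`/`return 0` lines above.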
00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 617173 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 617173 ']' 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 617173 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 617173 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 617173' killing process with pid 617173 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 617173 00:19:37.111 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 617173
00:19:37.111 [2024-07-12 11:24:03.030929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feb1a0 is same with the state(5) to be set
00:19:37.111 [2024-07-12 11:24:03.033243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cede0 is same with the state(5) to be set
00:19:37.112 [2024-07-12 11:24:03.036652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febae0 is same with the state(5) to be set
00:19:37.113 [2024-07-12 11:24:03.038804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set
00:19:37.113 [2024-07-12 11:24:03.039097]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039265] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039416] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039565] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.039651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1febfa0 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.040479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.040509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.040524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.040538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.113 [2024-07-12 11:24:03.040551] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040705] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040856] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.040993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041068] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-12 11:24:03.041159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with id:0 cdw10:00000000 cdw11:00000000 00:19:37.114 the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-12 11:24:03.041173] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef600 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the 
state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-12 11:24:03.041349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with id:0 cdw10:00000000 cdw11:00000000 00:19:37.114 the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec440 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc75c60 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041606] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7f450 is same with the state(5) to be set 00:19:37.114 [2024-07-12 11:24:03.041677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.114 [2024-07-12 11:24:03.041725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.114 [2024-07-12 11:24:03.041739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.115 [2024-07-12 11:24:03.041752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.041773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.115 [2024-07-12 11:24:03.041787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.041800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1f240 is same with the 
state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.041842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.115 [2024-07-12 11:24:03.041862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.041887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.115 [2024-07-12 11:24:03.041901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.041924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.115 [2024-07-12 11:24:03.041937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.041950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.115 [2024-07-12 11:24:03.041963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.041975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53830 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.043442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.043973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.043988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.044149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.044165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.044197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.044199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.115 [2024-07-12 11:24:03.044210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.044213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.115 [2024-07-12 11:24:03.044222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.115 [2024-07-12 11:24:03.044229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044247] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 
00:19:37.116 [2024-07-12 11:24:03.044341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 
11:24:03.044581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 
[2024-07-12 11:24:03.044672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.116 [2024-07-12 11:24:03.044830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.116 [2024-07-12 11:24:03.044843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.116 [2024-07-12 11:24:03.044861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.044883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.044897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.044910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.044936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 
[2024-07-12 11:24:03.044940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.044949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.044961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.044974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.044987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.044998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.045012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:37.117 [2024-07-12 11:24:03.045013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.045028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.045044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.045044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.045062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fecda0 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.045079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:37.117 [2024-07-12 11:24:03.045517] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeacec0 was disconnected and freed. reset controller. 
00:19:37.117 [2024-07-12 11:24:03.045947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.045971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.045992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046123] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.117 [2024-07-12 11:24:03.046306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.117 [2024-07-12 11:24:03.046319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.117 [2024-07-12 11:24:03.046324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046419] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 
[2024-07-12 11:24:03.046515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 
[2024-07-12 11:24:03.046884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the 
state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.118 [2024-07-12 11:24:03.046978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.046988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.118 [2024-07-12 11:24:03.046991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.118 [2024-07-12 11:24:03.047003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fed260 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 
[2024-07-12 11:24:03.047207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047836] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.119 [2024-07-12 11:24:03.047935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 11:24:03.047941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.119 [2024-07-12 11:24:03.047949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.119 [2024-07-12 
11:24:03.047955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.047962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.047970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.120 [2024-07-12 11:24:03.047974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.047984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.047991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048062] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xda22f0 was disconnected and freed. reset controller. 00:19:37.120 [2024-07-12 11:24:03.048066] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 
11:24:03.048255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048413] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048556] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.048665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce940 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.049586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:37.120 [2024-07-12 11:24:03.049653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x755610 (9): Bad file descriptor 00:19:37.120 [2024-07-12 11:24:03.051276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:37.120 [2024-07-12 11:24:03.051309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1f240 (9): Bad file descriptor 00:19:37.120 [2024-07-12 11:24:03.051378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17990 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.051526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcef600 (9): Bad file descriptor 00:19:37.120 [2024-07-12 11:24:03.051577] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1d350 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.051742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.120 [2024-07-12 11:24:03.051849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.120 [2024-07-12 11:24:03.051862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17bb0 is same with the state(5) to be set 00:19:37.120 [2024-07-12 11:24:03.051925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.121 [2024-07-12 11:24:03.051946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.051972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.121 [2024-07-12 11:24:03.051991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.052005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.121 [2024-07-12 11:24:03.052018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.052032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.121 [2024-07-12 11:24:03.052045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.052057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc76280 is same with the state(5) to be set 00:19:37.121 [2024-07-12 11:24:03.052086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc75c60 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.052117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7f450 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.052147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53830 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.053017] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.053171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.121 [2024-07-12 11:24:03.053200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x755610 with addr=10.0.0.2, port=4420 00:19:37.121 [2024-07-12 11:24:03.053216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x755610 is same with the state(5) to be set 00:19:37.121 [2024-07-12 11:24:03.053288] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.053620] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.053690] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.053756] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.054089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.121 [2024-07-12 11:24:03.054117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1f240 with addr=10.0.0.2, port=4420 00:19:37.121 [2024-07-12 11:24:03.054133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1f240 is same with the state(5) to be set 00:19:37.121 [2024-07-12 11:24:03.054157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x755610 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.054288] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.054355] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.054420] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:37.121 [2024-07-12 11:24:03.054456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1f240 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.054478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:37.121 [2024-07-12 11:24:03.054491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:37.121 [2024-07-12 11:24:03.054506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:37.121 [2024-07-12 11:24:03.054600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:37.121 [2024-07-12 11:24:03.054622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:37.121 [2024-07-12 11:24:03.054635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:37.121 [2024-07-12 11:24:03.054648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:37.121 [2024-07-12 11:24:03.054708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:37.121 [2024-07-12 11:24:03.061336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17990 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.061420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1d350 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.061456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17bb0 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.061487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc76280 (9): Bad file descriptor 00:19:37.121 [2024-07-12 11:24:03.061680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.061980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.061994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062475] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.121 [2024-07-12 11:24:03.062490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.121 [2024-07-12 11:24:03.062504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 11:24:03.062792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.122 [2024-07-12 11:24:03.062806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.122 [2024-07-12 
11:24:03.062822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.062835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.062851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.062871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.062889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.062903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.062925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.062939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.062954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.062969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.062984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.062998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.063656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.063670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7d70 is same with the state(5) to be set
00:19:37.122 [2024-07-12 11:24:03.064981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.065005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.065026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.065042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.065058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.065072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.065087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.122 [2024-07-12 11:24:03.065100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.122 [2024-07-12 11:24:03.065120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.065978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.065993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.123 [2024-07-12 11:24:03.066219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.123 [2024-07-12 11:24:03.066238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.066924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.066939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3780 is same with the state(5) to be set
00:19:37.124 [2024-07-12 11:24:03.068201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.124 [2024-07-12 11:24:03.068783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.124 [2024-07-12 11:24:03.068796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.125 [2024-07-12 11:24:03.068812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.125 [2024-07-12 11:24:03.068825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.125 [2024-07-12 11:24:03.068841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.125 [2024-07-12 11:24:03.068854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.125 [2024-07-12 11:24:03.068876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.125 [2024-07-12 11:24:03.068891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.125 [2024-07-12 11:24:03.068907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.125 [2024-07-12 11:24:03.068921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.125 [2024-07-12 11:24:03.068937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.125 [2024-07-12 11:24:03.068950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.125 [2024-07-12 11:24:03.068966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.125 [2024-07-12 11:24:03.068980] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.068995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 
11:24:03.069328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:37.125 [2024-07-12 11:24:03.069823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.069980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.069994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.070009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.070022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.070038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.125 [2024-07-12 11:24:03.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.125 [2024-07-12 11:24:03.070066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.070080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.070095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.070108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.070122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4eb50 is same with the state(5) to be set 00:19:37.126 [2024-07-12 11:24:03.071417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:37.126 [2024-07-12 11:24:03.071627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.071985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.071999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.126 [2024-07-12 11:24:03.072460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.126 [2024-07-12 11:24:03.072473] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.126 [2024-07-12 11:24:03.072489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.126 [2024-07-12 11:24:03.072502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.126 [2024-07-12 11:24:03.072517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.126 [2024-07-12 11:24:03.072530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.126 [2024-07-12 11:24:03.072545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.126 [2024-07-12 11:24:03.072567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.126 [2024-07-12 11:24:03.072585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.126 [2024-07-12 11:24:03.072599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.126 [2024-07-12 11:24:03.072614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.126 [2024-07-12 11:24:03.072628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.126 [2024-07-12 11:24:03.072643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.126 [2024-07-12 11:24:03.072656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.072973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.072986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.073336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.073351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb1fc0 is same with the state(5) to be set
00:19:37.127 [2024-07-12 11:24:03.075806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:37.127 [2024-07-12 11:24:03.075840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:37.127 [2024-07-12 11:24:03.075860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:37.127 [2024-07-12 11:24:03.075975] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.127 [2024-07-12 11:24:03.076025] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.127 [2024-07-12 11:24:03.076127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:37.127 [2024-07-12 11:24:03.076151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:37.127 [2024-07-12 11:24:03.076377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.127 [2024-07-12 11:24:03.076406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc53830 with addr=10.0.0.2, port=4420
00:19:37.127 [2024-07-12 11:24:03.076423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc53830 is same with the state(5) to be set
00:19:37.127 [2024-07-12 11:24:03.076524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.127 [2024-07-12 11:24:03.076548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc7f450 with addr=10.0.0.2, port=4420
00:19:37.127 [2024-07-12 11:24:03.076564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7f450 is same with the state(5) to be set
00:19:37.127 [2024-07-12 11:24:03.076647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.127 [2024-07-12 11:24:03.076670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc75c60 with addr=10.0.0.2, port=4420
00:19:37.127 [2024-07-12 11:24:03.076685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc75c60 is same with the state(5) to be set
00:19:37.127 [2024-07-12 11:24:03.077556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.077605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.077638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.077667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.077696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.077732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.127 [2024-07-12 11:24:03.077761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.127 [2024-07-12 11:24:03.077774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.077985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.078971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.078984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.079000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.079014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.079029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.079042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.079059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.128 [2024-07-12 11:24:03.079072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.128 [2024-07-12 11:24:03.079088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.079382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.079396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeab9f0 is same with the state(5) to be set
00:19:37.129 [2024-07-12 11:24:03.080638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.080982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.080996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.081012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.081026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.081041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.081054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.081070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.081083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.081098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.129 [2024-07-12 11:24:03.081112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:37.129 [2024-07-12 11:24:03.081128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:37.129 [2024-07-12 11:24:03.081636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.129 [2024-07-12 11:24:03.081650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 
11:24:03.081802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.081970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.081986] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 
[2024-07-12 11:24:03.082341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.082612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.082627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae390 is same with the state(5) to be set 00:19:37.130 [2024-07-12 11:24:03.083872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.083907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.083927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.083942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.083959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.083974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.083989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:37.130 [2024-07-12 11:24:03.084118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.130 [2024-07-12 11:24:03.084222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.130 [2024-07-12 11:24:03.084238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:37.131 [2024-07-12 11:24:03.084624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084781] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.084980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.084996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 
11:24:03.085314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.131 [2024-07-12 11:24:03.085344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.131 [2024-07-12 11:24:03.085360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.085792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 
[2024-07-12 11:24:03.085821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.085836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf860 is same with the state(5) to be set 00:19:37.132 [2024-07-12 11:24:03.087078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.132 [2024-07-12 11:24:03.087851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.132 [2024-07-12 11:24:03.087872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.087888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.087909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.087923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.087938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.087952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.087972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.087986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 
11:24:03.088091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:37.133 [2024-07-12 11:24:03.088594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.088985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.133 [2024-07-12 11:24:03.088998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.133 [2024-07-12 11:24:03.089013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb0b10 is same with the state(5) to be set 00:19:37.133 [2024-07-12 11:24:03.090895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:37.133 [2024-07-12 11:24:03.090928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:37.133 [2024-07-12 11:24:03.090947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:37.133 [2024-07-12 11:24:03.090966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:37.133 task offset: 16384 on job bdev=Nvme6n1 fails 00:19:37.133 00:19:37.133 Latency(us) 00:19:37.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.133 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:37.133 Job: Nvme1n1 ended in 
about 0.73 seconds with error
00:19:37.133 Verification LBA range: start 0x0 length 0x400
00:19:37.133 Nvme1n1 : 0.73 175.16 10.95 87.58 0.00 240254.42 20388.98 245444.46
00:19:37.133 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.133 Job: Nvme2n1 ended in about 0.72 seconds with error
00:19:37.133 Verification LBA range: start 0x0 length 0x400
00:19:37.133 Nvme2n1 : 0.72 178.57 11.16 89.29 0.00 229460.83 7233.23 254765.13
00:19:37.134 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme3n1 ended in about 0.73 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme3n1 : 0.73 179.85 11.24 87.20 0.00 224476.40 13495.56 234570.33
00:19:37.134 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme4n1 ended in about 0.74 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme4n1 : 0.74 173.64 10.85 86.82 0.00 224080.78 16505.36 257872.02
00:19:37.134 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme5n1 ended in about 0.75 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme5n1 : 0.75 92.45 5.78 79.05 0.00 330576.02 20777.34 285834.05
00:19:37.134 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme6n1 ended in about 0.72 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme6n1 : 0.72 178.96 11.18 89.48 0.00 204464.73 7815.77 253211.69
00:19:37.134 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme7n1 ended in about 0.75 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme7n1 : 0.75 176.09 11.01 85.38 0.00 205447.43 19223.89 242337.56
00:19:37.134 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme8n1 ended in about 0.75 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme8n1 : 0.75 170.03 10.63 85.01 0.00 204671.05 20680.25 251658.24
00:19:37.134 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme9n1 ended in about 0.76 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme9n1 : 0.76 84.66 5.29 84.66 0.00 299882.57 41360.50 257872.02
00:19:37.134 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:37.134 Job: Nvme10n1 ended in about 0.74 seconds with error
00:19:37.134 Verification LBA range: start 0x0 length 0x400
00:19:37.134 Nvme10n1 : 0.74 86.45 5.40 86.45 0.00 283304.77 22719.15 274959.93
00:19:37.134 ===================================================================================================================
00:19:37.134 Total : 1495.85 93.49 860.91 0.00 237897.29 7233.23 285834.05
00:19:37.134 [2024-07-12 11:24:03.116828] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:37.134 [2024-07-12 11:24:03.117190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.117227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcef600 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.117248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcef600 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.117345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.117370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x755610 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.117387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x755610 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.117426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc53830 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.117450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7f450 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.117469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc75c60 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.117522] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.117552] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.117573] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.117592] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
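The repeated `connect() failed, errno = 111` records above are the initiator's reconnect attempts hitting a target that the shutdown test has already torn down: nothing is listening on 10.0.0.2:4420 anymore, so every TCP connect is refused. A minimal sketch (not part of the harness; the localhost port below is a stand-in for the target address) confirming what errno 111 means on Linux:

```python
import errno
import os
import socket

# errno 111 in the log is ECONNREFUSED: the listener on port 4420 is gone,
# so the kernel refuses each of the initiator's reconnect attempts.
assert errno.ECONNREFUSED == 111
print(os.strerror(errno.ECONNREFUSED))  # "Connection refused"

# Reproducing the refusal locally against a port nothing listens on
# (127.0.0.1:1 is a hypothetical stand-in, not the harness's 10.0.0.2:4420):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
err = s.connect_ex(("127.0.0.1", 1))
s.close()
print(err == errno.ECONNREFUSED)
```

The subsequent "Bad file descriptor" flush errors follow the same pattern: once a qpair's socket has been closed during shutdown, any pending completion flush on it fails with EBADF (errno 9).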
00:19:37.134 [2024-07-12 11:24:03.117613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x755610 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.117638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcef600 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.117782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:37.134 [2024-07-12 11:24:03.117970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.118005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1f240 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.118021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1f240 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.118111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.118137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc76280 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.118153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc76280 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.118236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.118261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd17bb0 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.118277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17bb0 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.118369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.118393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd17990 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.118408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17990 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.118426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.118440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.118455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:37.134 [2024-07-12 11:24:03.118477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.118492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.118505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:37.134 [2024-07-12 11:24:03.118522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.118535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.118554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:37.134 [2024-07-12 11:24:03.118592] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.118615] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.118634] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.118652] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.118669] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:37.134 [2024-07-12 11:24:03.119833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.119858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.119897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.119973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:37.134 [2024-07-12 11:24:03.120007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1d350 with addr=10.0.0.2, port=4420
00:19:37.134 [2024-07-12 11:24:03.120023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1d350 is same with the state(5) to be set
00:19:37.134 [2024-07-12 11:24:03.120041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1f240 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.120061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc76280 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.120078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17bb0 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.120096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17990 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.120111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.120124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.120136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:37.134 [2024-07-12 11:24:03.120155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.120169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.120182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:37.134 [2024-07-12 11:24:03.120256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.120276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.120293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1d350 (9): Bad file descriptor
00:19:37.134 [2024-07-12 11:24:03.120309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.120322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.120335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:37.134 [2024-07-12 11:24:03.120352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.120366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.120384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:37.134 [2024-07-12 11:24:03.120400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.120414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.120426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:37.134 [2024-07-12 11:24:03.120442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:37.134 [2024-07-12 11:24:03.120456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:37.134 [2024-07-12 11:24:03.120468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:37.134 [2024-07-12 11:24:03.120535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.120555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.134 [2024-07-12 11:24:03.120567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:37.135 [2024-07-12 11:24:03.120577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
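Shortly after this dump the harness runs `kill -9 617397`, gets `kill: (617397) - No such process`, and masks the failure with `|| true`: the process already died during the errors above, so its pid names nothing by cleanup time. A small sketch (a throwaway `sleep` child stands in for the already-dead process; not the harness code) of why a second kill reports "No such process" (ESRCH):

```python
import errno
import os
import signal
import subprocess

# Start a throwaway child as a stand-in for the process under test.
child = subprocess.Popen(["sleep", "60"])

os.kill(child.pid, signal.SIGKILL)  # like the harness's `kill -9 <pid>`
child.wait()                        # reap it; the pid now names no process

try:
    os.kill(child.pid, signal.SIGKILL)  # a second kill, as in the log
except ProcessLookupError as exc:
    # This is the "(pid) - No such process" the script swallows with `|| true`.
    assert exc.errno == errno.ESRCH
    print("No such process")
```

This is why the script's `kill ... || true` idiom is safe: a dead target is exactly the outcome the shutdown test wants, so ESRCH is treated as success.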
00:19:37.135 [2024-07-12 11:24:03.120589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:37.135 [2024-07-12 11:24:03.120601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:37.135 [2024-07-12 11:24:03.120613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:37.135 [2024-07-12 11:24:03.120650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:37.718 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:37.718 11:24:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 617397 00:19:38.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (617397) - No such process 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@117 -- # sync 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:38.656 rmmod nvme_tcp 00:19:38.656 rmmod nvme_fabrics 00:19:38.656 rmmod nvme_keyring 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.656 11:24:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:19:41.220 00:19:41.220 real 0m7.167s 00:19:41.220 user 0m16.622s 00:19:41.220 sys 0m1.374s 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:41.220 ************************************ 00:19:41.220 END TEST nvmf_shutdown_tc3 00:19:41.220 ************************************ 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:41.220 00:19:41.220 real 0m26.928s 00:19:41.220 user 1m13.962s 00:19:41.220 sys 0m6.077s 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.220 11:24:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:41.220 ************************************ 00:19:41.220 END TEST nvmf_shutdown 00:19:41.220 ************************************ 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:41.220 11:24:06 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.220 11:24:06 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.220 11:24:06 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:41.220 11:24:06 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:41.220 
11:24:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.220 11:24:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.220 ************************************ 00:19:41.220 START TEST nvmf_multicontroller 00:19:41.220 ************************************ 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:41.220 * Looking for test storage... 00:19:41.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.220 11:24:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@297 -- # local -ga x722 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:43.126 11:24:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:43.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:43.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.126 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:43.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:43.127 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:43.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:43.127 00:19:43.127 --- 10.0.0.2 ping statistics --- 00:19:43.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.127 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:19:43.127 00:19:43.127 --- 10.0.0.1 ping statistics --- 00:19:43.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.127 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=620343 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 
620343 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 620343 ']' 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.127 11:24:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.127 [2024-07-12 11:24:08.956575] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:43.127 [2024-07-12 11:24:08.956658] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.127 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.127 [2024-07-12 11:24:09.030106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.127 [2024-07-12 11:24:09.142503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.127 [2024-07-12 11:24:09.142555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.127 [2024-07-12 11:24:09.142584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.127 [2024-07-12 11:24:09.142595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:43.127 [2024-07-12 11:24:09.142605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.127 [2024-07-12 11:24:09.142754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.128 [2024-07-12 11:24:09.142822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.128 [2024-07-12 11:24:09.142824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.128 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.128 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:43.128 11:24:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.128 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.128 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.386 [2024-07-12 11:24:09.275567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:43.386 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:19:43.387 Malloc0 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 [2024-07-12 11:24:09.341105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 [2024-07-12 11:24:09.349006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 Malloc1 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 
11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=620507 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 620507 /var/tmp/bdevperf.sock 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 620507 ']' 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.387 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.645 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.645 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:19:43.645 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:43.645 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.645 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.904 NVMe0n1 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.904 1 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.904 request: 00:19:43.904 { 00:19:43.904 "name": "NVMe0", 00:19:43.904 "trtype": "tcp", 00:19:43.904 "traddr": "10.0.0.2", 00:19:43.904 "adrfam": "ipv4", 00:19:43.904 "trsvcid": "4420", 00:19:43.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.904 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:43.904 "hostaddr": "10.0.0.2", 00:19:43.904 "hostsvcid": "60000", 00:19:43.904 "prchk_reftag": false, 00:19:43.904 "prchk_guard": false, 00:19:43.904 "hdgst": false, 00:19:43.904 "ddgst": false, 00:19:43.904 "method": "bdev_nvme_attach_controller", 00:19:43.904 "req_id": 1 00:19:43.904 } 00:19:43.904 Got JSON-RPC error response 00:19:43.904 response: 00:19:43.904 { 00:19:43.904 "code": -114, 00:19:43.904 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:43.904 } 00:19:43.904 
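The `NOT rpc_cmd …` calls above come from a wrapper in SPDK's autotest_common.sh that inverts a command's exit status, so the test passes only when the duplicate attach is rejected (here with JSON-RPC error -114). A minimal sketch of that pattern, assuming a simplified `NOT` (the real helper also tracks `es` and retry state shown in the trace):

```shell
# Simplified sketch of the "NOT" assertion helper exercised above:
# succeed only if the wrapped command fails, which is how the test
# asserts that re-attaching a controller named NVMe0 over the same
# network path is rejected.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> assertion fails
    fi
    return 0        # command failed as expected -> assertion passes
}

# 'false' stands in for the failing duplicate attach_controller RPC.
NOT false && echo "duplicate attach correctly rejected"
```

The same wrapper is reused for the `-x disable` and `-x failover` variants later in the trace; only the expected error message differs.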
11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:19:43.904 request: 00:19:43.904 { 00:19:43.904 "name": "NVMe0", 00:19:43.904 "trtype": "tcp", 00:19:43.904 "traddr": "10.0.0.2", 00:19:43.904 "adrfam": "ipv4", 00:19:43.904 "trsvcid": "4420", 00:19:43.904 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:43.904 "hostaddr": "10.0.0.2", 00:19:43.904 "hostsvcid": "60000", 00:19:43.904 "prchk_reftag": false, 00:19:43.904 "prchk_guard": false, 00:19:43.904 "hdgst": false, 00:19:43.904 "ddgst": false, 00:19:43.904 "method": "bdev_nvme_attach_controller", 00:19:43.904 "req_id": 1 00:19:43.904 } 00:19:43.904 Got JSON-RPC error response 00:19:43.904 response: 00:19:43.904 { 00:19:43.904 "code": -114, 00:19:43.904 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:43.904 } 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:19:43.904 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.905 request: 00:19:43.905 { 00:19:43.905 "name": "NVMe0", 00:19:43.905 "trtype": "tcp", 00:19:43.905 "traddr": "10.0.0.2", 00:19:43.905 "adrfam": "ipv4", 00:19:43.905 "trsvcid": "4420", 00:19:43.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.905 "hostaddr": "10.0.0.2", 00:19:43.905 "hostsvcid": "60000", 00:19:43.905 "prchk_reftag": false, 00:19:43.905 "prchk_guard": false, 00:19:43.905 "hdgst": false, 00:19:43.905 "ddgst": false, 00:19:43.905 "multipath": "disable", 00:19:43.905 "method": "bdev_nvme_attach_controller", 00:19:43.905 "req_id": 1 00:19:43.905 } 00:19:43.905 Got JSON-RPC error response 00:19:43.905 response: 00:19:43.905 { 00:19:43.905 "code": -114, 00:19:43.905 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:43.905 } 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.905 request: 00:19:43.905 { 00:19:43.905 "name": "NVMe0", 00:19:43.905 "trtype": "tcp", 00:19:43.905 "traddr": "10.0.0.2", 00:19:43.905 "adrfam": "ipv4", 00:19:43.905 "trsvcid": "4420", 00:19:43.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.905 "hostaddr": "10.0.0.2", 00:19:43.905 
"hostsvcid": "60000", 00:19:43.905 "prchk_reftag": false, 00:19:43.905 "prchk_guard": false, 00:19:43.905 "hdgst": false, 00:19:43.905 "ddgst": false, 00:19:43.905 "multipath": "failover", 00:19:43.905 "method": "bdev_nvme_attach_controller", 00:19:43.905 "req_id": 1 00:19:43.905 } 00:19:43.905 Got JSON-RPC error response 00:19:43.905 response: 00:19:43.905 { 00:19:43.905 "code": -114, 00:19:43.905 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:43.905 } 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.905 11:24:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.163 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.163 11:24:10 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.163 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.421 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:44.421 11:24:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:45.355 0 00:19:45.355 11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:45.355 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.355 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.614 
11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 620507 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 620507 ']' 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 620507 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620507 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620507' 00:19:45.614 killing process with pid 620507 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 620507 00:19:45.614 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 620507 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:19:45.872 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:45.872 [2024-07-12 11:24:09.448445] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:19:45.872 [2024-07-12 11:24:09.448529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620507 ] 00:19:45.872 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.872 [2024-07-12 11:24:09.509597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.872 [2024-07-12 11:24:09.619397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.872 [2024-07-12 11:24:10.340071] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4e439121-a5d1-4296-a45c-d5d1f8c00f46 already exists 00:19:45.872 [2024-07-12 11:24:10.340116] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4e439121-a5d1-4296-a45c-d5d1f8c00f46 alias for bdev NVMe1n1 00:19:45.872 [2024-07-12 11:24:10.340132] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:45.872 Running I/O for 1 seconds... 
00:19:45.872 00:19:45.872 Latency(us) 00:19:45.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.872 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:45.872 NVMe0n1 : 1.00 19272.79 75.28 0.00 0.00 6630.37 5728.33 13301.38 00:19:45.872 =================================================================================================================== 00:19:45.872 Total : 19272.79 75.28 0.00 0.00 6630.37 5728.33 13301.38 00:19:45.872 Received shutdown signal, test time was about 1.000000 seconds 00:19:45.872 00:19:45.872 Latency(us) 00:19:45.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.872 =================================================================================================================== 00:19:45.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.872 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.872 rmmod nvme_tcp 00:19:45.872 rmmod nvme_fabrics 00:19:45.872 rmmod nvme_keyring 
00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 620343 ']' 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 620343 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 620343 ']' 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 620343 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620343 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620343' 00:19:45.872 killing process with pid 620343 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 620343 00:19:45.872 11:24:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 620343 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.131 11:24:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.689 11:24:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:48.689 00:19:48.689 real 0m7.478s 00:19:48.689 user 0m12.294s 00:19:48.689 sys 0m2.166s 00:19:48.689 11:24:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:48.689 11:24:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:48.689 ************************************ 00:19:48.689 END TEST nvmf_multicontroller 00:19:48.689 ************************************ 00:19:48.689 11:24:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:48.689 11:24:14 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:48.689 11:24:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:48.689 11:24:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.689 11:24:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:48.689 ************************************ 00:19:48.689 START TEST nvmf_aer 00:19:48.689 ************************************ 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:48.689 * Looking for test storage... 
00:19:48.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:48.689 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.690 11:24:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:50.591 11:24:16 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:50.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:50.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:50.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.591 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:50.592 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:50.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:19:50.592 00:19:50.592 --- 10.0.0.2 ping statistics --- 00:19:50.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.592 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:19:50.592 00:19:50.592 --- 10.0.0.1 ping statistics --- 00:19:50.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.592 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=622720 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 622720 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 622720 ']' 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.592 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.592 [2024-07-12 11:24:16.615858] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:50.592 [2024-07-12 11:24:16.615967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.592 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.592 [2024-07-12 11:24:16.684436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.851 [2024-07-12 11:24:16.798659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:50.851 [2024-07-12 11:24:16.798715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.851 [2024-07-12 11:24:16.798728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.851 [2024-07-12 11:24:16.798738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.851 [2024-07-12 11:24:16.798748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.851 [2024-07-12 11:24:16.798828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.851 [2024-07-12 11:24:16.798897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.851 [2024-07-12 11:24:16.798935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.851 [2024-07-12 11:24:16.798941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.851 [2024-07-12 11:24:16.955739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.851 11:24:16 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.851 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.110 Malloc0 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.110 11:24:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.110 [2024-07-12 11:24:17.009081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.110 [ 00:19:51.110 { 00:19:51.110 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.110 "subtype": "Discovery", 00:19:51.110 "listen_addresses": [], 00:19:51.110 "allow_any_host": true, 00:19:51.110 "hosts": [] 00:19:51.110 }, 00:19:51.110 { 00:19:51.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.110 "subtype": "NVMe", 00:19:51.110 "listen_addresses": [ 00:19:51.110 { 00:19:51.110 "trtype": "TCP", 00:19:51.110 "adrfam": "IPv4", 00:19:51.110 "traddr": "10.0.0.2", 00:19:51.110 "trsvcid": "4420" 00:19:51.110 } 00:19:51.110 ], 00:19:51.110 "allow_any_host": true, 00:19:51.110 "hosts": [], 00:19:51.110 "serial_number": "SPDK00000000000001", 00:19:51.110 "model_number": "SPDK bdev Controller", 00:19:51.110 "max_namespaces": 2, 00:19:51.110 "min_cntlid": 1, 00:19:51.110 "max_cntlid": 65519, 00:19:51.110 "namespaces": [ 00:19:51.110 { 00:19:51.110 "nsid": 1, 00:19:51.110 "bdev_name": "Malloc0", 00:19:51.110 "name": "Malloc0", 00:19:51.110 "nguid": "2FEE42F3975C43E584B1C30E6AB2EEDD", 00:19:51.110 "uuid": "2fee42f3-975c-43e5-84b1-c30e6ab2eedd" 00:19:51.110 } 00:19:51.110 ] 00:19:51.110 } 00:19:51.110 ] 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=622859 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:51.110 11:24:17 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:51.110 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:19:51.110 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.369 Malloc1 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.369 [ 00:19:51.369 { 00:19:51.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.369 "subtype": "Discovery", 00:19:51.369 "listen_addresses": [], 00:19:51.369 "allow_any_host": true, 00:19:51.369 "hosts": [] 00:19:51.369 }, 00:19:51.369 { 00:19:51.369 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.369 "subtype": "NVMe", 00:19:51.369 "listen_addresses": [ 00:19:51.369 { 00:19:51.369 "trtype": "TCP", 00:19:51.369 "adrfam": "IPv4", 00:19:51.369 "traddr": "10.0.0.2", 00:19:51.369 "trsvcid": "4420" 00:19:51.369 } 00:19:51.369 ], 00:19:51.369 "allow_any_host": true, 00:19:51.369 "hosts": [], 00:19:51.369 "serial_number": "SPDK00000000000001", 00:19:51.369 "model_number": "SPDK bdev Controller", 00:19:51.369 "max_namespaces": 2, 00:19:51.369 "min_cntlid": 1, 00:19:51.369 "max_cntlid": 65519, 
00:19:51.369 "namespaces": [ 00:19:51.369 { 00:19:51.369 "nsid": 1, 00:19:51.369 "bdev_name": "Malloc0", 00:19:51.369 "name": "Malloc0", 00:19:51.369 "nguid": "2FEE42F3975C43E584B1C30E6AB2EEDD", 00:19:51.369 "uuid": "2fee42f3-975c-43e5-84b1-c30e6ab2eedd" 00:19:51.369 }, 00:19:51.369 { 00:19:51.369 "nsid": 2, 00:19:51.369 "bdev_name": "Malloc1", 00:19:51.369 "name": "Malloc1", 00:19:51.369 "nguid": "C109BAD8672B4FF4934700A187364CE4", 00:19:51.369 "uuid": "c109bad8-672b-4ff4-9347-00a187364ce4" 00:19:51.369 } 00:19:51.369 ] 00:19:51.369 } 00:19:51.369 ] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 622859 00:19:51.369 Asynchronous Event Request test 00:19:51.369 Attaching to 10.0.0.2 00:19:51.369 Attached to 10.0.0.2 00:19:51.369 Registering asynchronous event callbacks... 00:19:51.369 Starting namespace attribute notice tests for all controllers... 00:19:51.369 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:51.369 aer_cb - Changed Namespace 00:19:51.369 Cleaning up... 
00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.369 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.369 rmmod nvme_tcp 00:19:51.628 rmmod nvme_fabrics 00:19:51.628 rmmod nvme_keyring 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 622720 ']' 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 622720 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 622720 ']' 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 622720 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 622720 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 622720' 00:19:51.628 killing process with pid 622720 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 622720 00:19:51.628 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 622720 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.887 11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.887 
11:24:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.792 11:24:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:53.792 00:19:53.792 real 0m5.508s 00:19:53.792 user 0m4.581s 00:19:53.792 sys 0m1.939s 00:19:53.793 11:24:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.793 11:24:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:53.793 ************************************ 00:19:53.793 END TEST nvmf_aer 00:19:53.793 ************************************ 00:19:53.793 11:24:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:53.793 11:24:19 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.793 11:24:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:53.793 11:24:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.793 11:24:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.793 ************************************ 00:19:53.793 START TEST nvmf_async_init 00:19:53.793 ************************************ 00:19:53.793 11:24:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:54.052 * Looking for test storage... 
00:19:54.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.052 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d14da7f09d29487d89a91b3aed4c1c3f 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.053 11:24:19 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.957 11:24:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.957 11:24:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.957 11:24:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.957 
11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:55.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:55.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:55.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:55.957 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.957 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.958 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:56.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:56.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:19:56.216 00:19:56.216 --- 10.0.0.2 ping statistics --- 00:19:56.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.216 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:56.216 00:19:56.216 --- 10.0.0.1 ping statistics --- 00:19:56.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.216 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=624802 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 624802 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 624802 ']' 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.216 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.216 [2024-07-12 11:24:22.210112] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:19:56.216 [2024-07-12 11:24:22.210210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.216 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.216 [2024-07-12 11:24:22.275148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.474 [2024-07-12 11:24:22.375782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:56.474 [2024-07-12 11:24:22.375833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.474 [2024-07-12 11:24:22.375856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.474 [2024-07-12 11:24:22.375875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.474 [2024-07-12 11:24:22.375900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.474 [2024-07-12 11:24:22.375931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 [2024-07-12 11:24:22.521586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 null0 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d14da7f09d29487d89a91b3aed4c1c3f 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.474 [2024-07-12 11:24:22.561834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.474 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.732 nvme0n1 00:19:56.732 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.732 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:56.732 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.733 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.733 [ 00:19:56.733 { 00:19:56.733 "name": "nvme0n1", 00:19:56.733 "aliases": [ 00:19:56.733 "d14da7f0-9d29-487d-89a9-1b3aed4c1c3f" 00:19:56.733 ], 00:19:56.733 "product_name": "NVMe disk", 00:19:56.733 "block_size": 512, 00:19:56.733 "num_blocks": 2097152, 00:19:56.733 "uuid": "d14da7f0-9d29-487d-89a9-1b3aed4c1c3f", 00:19:56.733 "assigned_rate_limits": { 00:19:56.733 "rw_ios_per_sec": 0, 00:19:56.733 "rw_mbytes_per_sec": 0, 00:19:56.733 "r_mbytes_per_sec": 0, 00:19:56.733 "w_mbytes_per_sec": 0 00:19:56.733 }, 00:19:56.733 "claimed": false, 00:19:56.733 "zoned": false, 00:19:56.733 "supported_io_types": { 00:19:56.733 "read": true, 00:19:56.733 "write": true, 00:19:56.733 "unmap": false, 00:19:56.733 "flush": true, 00:19:56.733 "reset": true, 00:19:56.733 "nvme_admin": true, 00:19:56.733 "nvme_io": true, 00:19:56.733 "nvme_io_md": false, 00:19:56.733 "write_zeroes": true, 00:19:56.733 "zcopy": false, 00:19:56.733 "get_zone_info": false, 00:19:56.733 "zone_management": false, 00:19:56.733 "zone_append": false, 00:19:56.733 "compare": 
true, 00:19:56.733 "compare_and_write": true, 00:19:56.733 "abort": true, 00:19:56.733 "seek_hole": false, 00:19:56.733 "seek_data": false, 00:19:56.733 "copy": true, 00:19:56.733 "nvme_iov_md": false 00:19:56.733 }, 00:19:56.733 "memory_domains": [ 00:19:56.733 { 00:19:56.733 "dma_device_id": "system", 00:19:56.733 "dma_device_type": 1 00:19:56.733 } 00:19:56.733 ], 00:19:56.733 "driver_specific": { 00:19:56.733 "nvme": [ 00:19:56.733 { 00:19:56.733 "trid": { 00:19:56.733 "trtype": "TCP", 00:19:56.733 "adrfam": "IPv4", 00:19:56.733 "traddr": "10.0.0.2", 00:19:56.733 "trsvcid": "4420", 00:19:56.733 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:56.733 }, 00:19:56.733 "ctrlr_data": { 00:19:56.733 "cntlid": 1, 00:19:56.733 "vendor_id": "0x8086", 00:19:56.733 "model_number": "SPDK bdev Controller", 00:19:56.733 "serial_number": "00000000000000000000", 00:19:56.733 "firmware_revision": "24.09", 00:19:56.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.733 "oacs": { 00:19:56.733 "security": 0, 00:19:56.733 "format": 0, 00:19:56.733 "firmware": 0, 00:19:56.733 "ns_manage": 0 00:19:56.733 }, 00:19:56.733 "multi_ctrlr": true, 00:19:56.733 "ana_reporting": false 00:19:56.733 }, 00:19:56.733 "vs": { 00:19:56.733 "nvme_version": "1.3" 00:19:56.733 }, 00:19:56.733 "ns_data": { 00:19:56.733 "id": 1, 00:19:56.733 "can_share": true 00:19:56.733 } 00:19:56.733 } 00:19:56.733 ], 00:19:56.733 "mp_policy": "active_passive" 00:19:56.733 } 00:19:56.733 } 00:19:56.733 ] 00:19:56.733 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.733 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:56.733 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.733 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.733 [2024-07-12 11:24:22.814784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:56.733 [2024-07-12 11:24:22.814921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf3090 (9): Bad file descriptor 00:19:56.991 [2024-07-12 11:24:22.957005] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:56.991 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.991 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:56.991 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.991 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.991 [ 00:19:56.991 { 00:19:56.991 "name": "nvme0n1", 00:19:56.991 "aliases": [ 00:19:56.991 "d14da7f0-9d29-487d-89a9-1b3aed4c1c3f" 00:19:56.991 ], 00:19:56.991 "product_name": "NVMe disk", 00:19:56.991 "block_size": 512, 00:19:56.991 "num_blocks": 2097152, 00:19:56.991 "uuid": "d14da7f0-9d29-487d-89a9-1b3aed4c1c3f", 00:19:56.991 "assigned_rate_limits": { 00:19:56.991 "rw_ios_per_sec": 0, 00:19:56.991 "rw_mbytes_per_sec": 0, 00:19:56.991 "r_mbytes_per_sec": 0, 00:19:56.991 "w_mbytes_per_sec": 0 00:19:56.991 }, 00:19:56.991 "claimed": false, 00:19:56.991 "zoned": false, 00:19:56.991 "supported_io_types": { 00:19:56.991 "read": true, 00:19:56.991 "write": true, 00:19:56.991 "unmap": false, 00:19:56.991 "flush": true, 00:19:56.991 "reset": true, 00:19:56.991 "nvme_admin": true, 00:19:56.991 "nvme_io": true, 00:19:56.991 "nvme_io_md": false, 00:19:56.991 "write_zeroes": true, 00:19:56.991 "zcopy": false, 00:19:56.991 "get_zone_info": false, 00:19:56.991 "zone_management": false, 00:19:56.991 "zone_append": false, 00:19:56.991 "compare": true, 00:19:56.991 "compare_and_write": true, 00:19:56.991 "abort": true, 00:19:56.991 "seek_hole": false, 00:19:56.991 "seek_data": false, 00:19:56.991 "copy": true, 00:19:56.991 "nvme_iov_md": 
false 00:19:56.991 }, 00:19:56.991 "memory_domains": [ 00:19:56.991 { 00:19:56.991 "dma_device_id": "system", 00:19:56.991 "dma_device_type": 1 00:19:56.991 } 00:19:56.991 ], 00:19:56.991 "driver_specific": { 00:19:56.991 "nvme": [ 00:19:56.991 { 00:19:56.991 "trid": { 00:19:56.991 "trtype": "TCP", 00:19:56.991 "adrfam": "IPv4", 00:19:56.991 "traddr": "10.0.0.2", 00:19:56.992 "trsvcid": "4420", 00:19:56.992 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:56.992 }, 00:19:56.992 "ctrlr_data": { 00:19:56.992 "cntlid": 2, 00:19:56.992 "vendor_id": "0x8086", 00:19:56.992 "model_number": "SPDK bdev Controller", 00:19:56.992 "serial_number": "00000000000000000000", 00:19:56.992 "firmware_revision": "24.09", 00:19:56.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.992 "oacs": { 00:19:56.992 "security": 0, 00:19:56.992 "format": 0, 00:19:56.992 "firmware": 0, 00:19:56.992 "ns_manage": 0 00:19:56.992 }, 00:19:56.992 "multi_ctrlr": true, 00:19:56.992 "ana_reporting": false 00:19:56.992 }, 00:19:56.992 "vs": { 00:19:56.992 "nvme_version": "1.3" 00:19:56.992 }, 00:19:56.992 "ns_data": { 00:19:56.992 "id": 1, 00:19:56.992 "can_share": true 00:19:56.992 } 00:19:56.992 } 00:19:56.992 ], 00:19:56.992 "mp_policy": "active_passive" 00:19:56.992 } 00:19:56.992 } 00:19:56.992 ] 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tshA2p2Z7P 00:19:56.992 11:24:22 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tshA2p2Z7P 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 [2024-07-12 11:24:23.011519] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.992 [2024-07-12 11:24:23.011677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tshA2p2Z7P 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 [2024-07-12 11:24:23.019532] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tshA2p2Z7P 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 [2024-07-12 11:24:23.027551] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.992 [2024-07-12 11:24:23.027609] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:56.992 nvme0n1 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 [ 00:19:56.992 { 00:19:56.992 "name": "nvme0n1", 00:19:56.992 "aliases": [ 00:19:56.992 "d14da7f0-9d29-487d-89a9-1b3aed4c1c3f" 00:19:56.992 ], 00:19:56.992 "product_name": "NVMe disk", 00:19:56.992 "block_size": 512, 00:19:56.992 "num_blocks": 2097152, 00:19:56.992 "uuid": "d14da7f0-9d29-487d-89a9-1b3aed4c1c3f", 00:19:56.992 "assigned_rate_limits": { 00:19:56.992 "rw_ios_per_sec": 0, 00:19:56.992 "rw_mbytes_per_sec": 0, 00:19:56.992 "r_mbytes_per_sec": 0, 00:19:56.992 "w_mbytes_per_sec": 0 00:19:56.992 }, 00:19:56.992 "claimed": false, 00:19:56.992 "zoned": false, 00:19:56.992 "supported_io_types": { 00:19:56.992 "read": true, 00:19:56.992 "write": true, 00:19:56.992 "unmap": false, 00:19:56.992 "flush": true, 00:19:56.992 "reset": true, 
00:19:56.992 "nvme_admin": true, 00:19:56.992 "nvme_io": true, 00:19:56.992 "nvme_io_md": false, 00:19:56.992 "write_zeroes": true, 00:19:56.992 "zcopy": false, 00:19:56.992 "get_zone_info": false, 00:19:56.992 "zone_management": false, 00:19:56.992 "zone_append": false, 00:19:56.992 "compare": true, 00:19:56.992 "compare_and_write": true, 00:19:56.992 "abort": true, 00:19:56.992 "seek_hole": false, 00:19:56.992 "seek_data": false, 00:19:56.992 "copy": true, 00:19:56.992 "nvme_iov_md": false 00:19:56.992 }, 00:19:56.992 "memory_domains": [ 00:19:56.992 { 00:19:56.992 "dma_device_id": "system", 00:19:56.992 "dma_device_type": 1 00:19:56.992 } 00:19:56.992 ], 00:19:56.992 "driver_specific": { 00:19:56.992 "nvme": [ 00:19:56.992 { 00:19:56.992 "trid": { 00:19:56.992 "trtype": "TCP", 00:19:56.992 "adrfam": "IPv4", 00:19:56.992 "traddr": "10.0.0.2", 00:19:56.992 "trsvcid": "4421", 00:19:56.992 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:56.992 }, 00:19:56.992 "ctrlr_data": { 00:19:56.992 "cntlid": 3, 00:19:56.992 "vendor_id": "0x8086", 00:19:56.992 "model_number": "SPDK bdev Controller", 00:19:56.992 "serial_number": "00000000000000000000", 00:19:56.992 "firmware_revision": "24.09", 00:19:56.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.992 "oacs": { 00:19:56.992 "security": 0, 00:19:56.992 "format": 0, 00:19:56.992 "firmware": 0, 00:19:56.992 "ns_manage": 0 00:19:56.992 }, 00:19:56.992 "multi_ctrlr": true, 00:19:56.992 "ana_reporting": false 00:19:56.992 }, 00:19:56.992 "vs": { 00:19:56.992 "nvme_version": "1.3" 00:19:56.992 }, 00:19:56.992 "ns_data": { 00:19:56.992 "id": 1, 00:19:56.992 "can_share": true 00:19:56.992 } 00:19:56.992 } 00:19:56.992 ], 00:19:56.992 "mp_policy": "active_passive" 00:19:56.992 } 00:19:56.992 } 00:19:56.992 ] 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.992 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.tshA2p2Z7P 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.270 rmmod nvme_tcp 00:19:57.270 rmmod nvme_fabrics 00:19:57.270 rmmod nvme_keyring 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 624802 ']' 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 624802 00:19:57.270 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 624802 ']' 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 624802 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 624802 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 624802' 00:19:57.271 killing process with pid 624802 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 624802 00:19:57.271 [2024-07-12 11:24:23.211213] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:57.271 [2024-07-12 11:24:23.211252] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:57.271 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 624802 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.531 11:24:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.438 11:24:25 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.438 00:19:59.438 real 0m5.605s 00:19:59.438 user 0m2.148s 00:19:59.438 sys 0m1.848s 00:19:59.438 11:24:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.438 11:24:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.438 ************************************ 00:19:59.438 END TEST nvmf_async_init 00:19:59.438 ************************************ 00:19:59.438 11:24:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:59.438 11:24:25 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:59.438 11:24:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:59.438 11:24:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.438 11:24:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.438 ************************************ 00:19:59.438 START TEST dma 00:19:59.438 ************************************ 00:19:59.438 11:24:25 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:59.697 * Looking for test storage... 
00:19:59.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:59.697 11:24:25 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.697 11:24:25 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:19:59.697 11:24:25 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.697 11:24:25 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.697 11:24:25 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.697 11:24:25 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.697 11:24:25 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:59.697 11:24:25 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:19:59.697 11:24:25 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.697 11:24:25 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.697 11:24:25 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:59.697 11:24:25 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:19:59.697 00:19:59.697 real 0m0.075s 00:19:59.697 user 0m0.042s 00:19:59.697 sys 0m0.038s 00:19:59.697 11:24:25 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.697 11:24:25 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:19:59.697 ************************************ 00:19:59.697 END TEST dma 00:19:59.697 ************************************ 00:19:59.697 11:24:25 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:19:59.697 11:24:25 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:59.697 11:24:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:59.697 11:24:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.697 11:24:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.697 ************************************ 00:19:59.697 START TEST nvmf_identify 00:19:59.697 ************************************ 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:59.697 * Looking for test storage... 00:19:59.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.697 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
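The `nvme gen-hostnqn` call near the top of `nvmf/common.sh` produced the host NQN `nqn.2014-08.org.nvmexpress:uuid:5b23e107-...` used throughout this run. A minimal sketch of that NQN shape, assuming a Linux host (it reads the kernel's random-UUID source rather than calling nvme-cli, with `uuidgen` as a fallback; `gen_hostnqn` is a hypothetical helper name, not part of the test scripts):

```shell
# Hedged sketch: build an nvme-cli style host NQN (nqn.2014-08.org.nvmexpress:uuid:<uuid>)
# from the kernel's random UUID source; falls back to uuidgen and lowercases the result.
gen_hostnqn() {
    local uuid
    uuid=$( (cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen) | tr 'A-F' 'a-f')
    echo "nqn.2014-08.org.nvmexpress:uuid:${uuid}"
}

gen_hostnqn
```

The resulting string is what the test later passes as `--hostnqn` (and its UUID part as `--hostid`) via the `NVME_HOST` array.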
00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
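The `build_nvmf_app_args` trace above (nvmf/common.sh@49) shows the target argv being assembled: the shared-memory id and the `0xFFFF` tracepoint mask are appended, matching the `-i 0 -e 0xFFFF` flags seen later when `nvmf_tgt` is launched. A minimal reproduction of that append step (paths and the shm id value are taken from this log):

```shell
# Reproduce the argv assembly traced at nvmf/common.sh@29:
# append the shared-memory id and the 0xFFFF tracepoint mask to the target command.
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0

build_nvmf_app_args() {
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
}

build_nvmf_app_args
echo "${NVMF_APP[*]}"
```

In the real run the array is later prefixed with `ip netns exec cvl_0_0_ns_spdk` (nvmf/common.sh@270) so the target starts inside the test namespace.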
00:19:59.698 11:24:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:01.595 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.595 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.595 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.595 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.595 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:01.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
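The `e810`, `x722`, and `mlx` arrays populated above bucket NICs by PCI vendor:device id before `gather_supported_nvmf_pci_devs` scans the bus. A hedged case-statement sketch of that bucketing, using only the ids visible in this log (`classify_nic` is a hypothetical helper, not a function in the test scripts):

```shell
# Hedged sketch of the vendor:device bucketing at nvmf/common.sh@296-318:
# Intel (0x8086) E810 ids -> e810, X722 -> x722, the listed Mellanox (0x15b3)
# ids -> mlx; anything else is unsupported for this test.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *) echo unsupported ;;
    esac
}

# The two ports found in this run, 0000:0a:00.0/1, report 0x8086:0x159b (ice driver):
classify_nic 0x8086 0x159b
```

This is why the log prints "Found 0000:0a:00.0 (0x8086 - 0x159b)" twice and then selects the `cvl_0_0`/`cvl_0_1` net devices under those PCI functions.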
00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:01.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:01.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:01.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.596 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.853 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:01.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:20:01.854 00:20:01.854 --- 10.0.0.2 ping statistics --- 00:20:01.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.854 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:20:01.854 00:20:01.854 --- 10.0.0.1 ping statistics --- 00:20:01.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.854 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=626932 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify 
-- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 626932 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 626932 ']' 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.854 11:24:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:01.854 [2024-07-12 11:24:27.896600] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:20:01.854 [2024-07-12 11:24:27.896698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.854 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.854 [2024-07-12 11:24:27.960012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.111 [2024-07-12 11:24:28.064967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:02.111 [2024-07-12 11:24:28.065024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.111 [2024-07-12 11:24:28.065048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.111 [2024-07-12 11:24:28.065058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.111 [2024-07-12 11:24:28.065068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.111 [2024-07-12 11:24:28.065150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.111 [2024-07-12 11:24:28.065195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.111 [2024-07-12 11:24:28.065248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.111 [2024-07-12 11:24:28.065251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.111 [2024-07-12 11:24:28.186504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.111 11:24:28 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.111 Malloc0 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.111 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.371 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.371 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.371 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.372 [2024-07-12 11:24:28.253479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.372 [ 00:20:02.372 { 00:20:02.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:02.372 "subtype": "Discovery", 00:20:02.372 "listen_addresses": [ 00:20:02.372 { 00:20:02.372 "trtype": "TCP", 00:20:02.372 "adrfam": "IPv4", 00:20:02.372 "traddr": "10.0.0.2", 00:20:02.372 "trsvcid": "4420" 00:20:02.372 } 00:20:02.372 ], 00:20:02.372 "allow_any_host": true, 00:20:02.372 "hosts": [] 00:20:02.372 }, 00:20:02.372 { 00:20:02.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.372 "subtype": "NVMe", 00:20:02.372 "listen_addresses": [ 00:20:02.372 { 00:20:02.372 "trtype": "TCP", 00:20:02.372 "adrfam": "IPv4", 00:20:02.372 "traddr": "10.0.0.2", 00:20:02.372 "trsvcid": "4420" 00:20:02.372 } 00:20:02.372 ], 00:20:02.372 "allow_any_host": true, 00:20:02.372 "hosts": [], 00:20:02.372 "serial_number": "SPDK00000000000001", 00:20:02.372 "model_number": "SPDK bdev Controller", 00:20:02.372 "max_namespaces": 32, 00:20:02.372 "min_cntlid": 1, 00:20:02.372 "max_cntlid": 65519, 00:20:02.372 "namespaces": [ 00:20:02.372 { 00:20:02.372 "nsid": 1, 00:20:02.372 "bdev_name": "Malloc0", 00:20:02.372 "name": "Malloc0", 00:20:02.372 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:02.372 "eui64": "ABCDEF0123456789", 00:20:02.372 "uuid": "e1a35db6-8d0b-4aca-929c-7559eb6ed436" 00:20:02.372 } 00:20:02.372 ] 00:20:02.372 } 00:20:02.372 ] 00:20:02.372 11:24:28 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.372 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:02.372 [2024-07-12 11:24:28.291852] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:20:02.372 [2024-07-12 11:24:28.291911] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626958 ] 00:20:02.372 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.372 [2024-07-12 11:24:28.326424] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:02.372 [2024-07-12 11:24:28.326485] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:02.372 [2024-07-12 11:24:28.326495] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:02.372 [2024-07-12 11:24:28.326512] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:02.372 [2024-07-12 11:24:28.326522] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:02.372 [2024-07-12 11:24:28.326745] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:02.372 [2024-07-12 11:24:28.326795] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2142540 0 00:20:02.372 [2024-07-12 11:24:28.336891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:02.372 [2024-07-12 11:24:28.336927] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:02.372 [2024-07-12 11:24:28.336936] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:02.372 [2024-07-12 11:24:28.336943] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:02.372 [2024-07-12 11:24:28.336993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.337006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.337014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.372 [2024-07-12 11:24:28.337030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:02.372 [2024-07-12 11:24:28.337056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.372 [2024-07-12 11:24:28.344880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.372 [2024-07-12 11:24:28.344898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.372 [2024-07-12 11:24:28.344906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.344928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.372 [2024-07-12 11:24:28.344945] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:02.372 [2024-07-12 11:24:28.344955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:02.372 [2024-07-12 11:24:28.344965] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:02.372 [2024-07-12 11:24:28.344986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.372 [2024-07-12 
11:24:28.344995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.372 [2024-07-12 11:24:28.345014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.372 [2024-07-12 11:24:28.345038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.372 [2024-07-12 11:24:28.345189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.372 [2024-07-12 11:24:28.345204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.372 [2024-07-12 11:24:28.345211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.372 [2024-07-12 11:24:28.345227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:02.372 [2024-07-12 11:24:28.345240] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:02.372 [2024-07-12 11:24:28.345253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.372 [2024-07-12 11:24:28.345278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.372 [2024-07-12 11:24:28.345299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 
00:20:02.372 [2024-07-12 11:24:28.345386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.372 [2024-07-12 11:24:28.345400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.372 [2024-07-12 11:24:28.345407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.372 [2024-07-12 11:24:28.345427] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:02.372 [2024-07-12 11:24:28.345442] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:02.372 [2024-07-12 11:24:28.345454] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.372 [2024-07-12 11:24:28.345479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.372 [2024-07-12 11:24:28.345499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.372 [2024-07-12 11:24:28.345578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.372 [2024-07-12 11:24:28.345590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.372 [2024-07-12 11:24:28.345597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.372 [2024-07-12 11:24:28.345613] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:02.372 [2024-07-12 11:24:28.345629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.372 [2024-07-12 11:24:28.345656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.372 [2024-07-12 11:24:28.345676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.372 [2024-07-12 11:24:28.345751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.372 [2024-07-12 11:24:28.345775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.372 [2024-07-12 11:24:28.345782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.372 [2024-07-12 11:24:28.345789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.372 [2024-07-12 11:24:28.345798] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:02.372 [2024-07-12 11:24:28.345806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:02.372 [2024-07-12 11:24:28.345819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:02.372 [2024-07-12 11:24:28.345930] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:20:02.372 [2024-07-12 11:24:28.345940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:02.372 [2024-07-12 11:24:28.345954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.345961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.345968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.345978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.373 [2024-07-12 11:24:28.346000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.373 [2024-07-12 11:24:28.346108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.373 [2024-07-12 11:24:28.346126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.373 [2024-07-12 11:24:28.346134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.373 [2024-07-12 11:24:28.346150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:02.373 [2024-07-12 11:24:28.346166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.346193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.373 [2024-07-12 11:24:28.346213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.373 [2024-07-12 11:24:28.346292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.373 [2024-07-12 11:24:28.346306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.373 [2024-07-12 11:24:28.346313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.373 [2024-07-12 11:24:28.346329] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:02.373 [2024-07-12 11:24:28.346337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:02.373 [2024-07-12 11:24:28.346350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:02.373 [2024-07-12 11:24:28.346369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:02.373 [2024-07-12 11:24:28.346385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.346403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.373 [2024-07-12 11:24:28.346434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.373 
[2024-07-12 11:24:28.346556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.373 [2024-07-12 11:24:28.346571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.373 [2024-07-12 11:24:28.346578] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346585] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2142540): datao=0, datal=4096, cccid=0 00:20:02.373 [2024-07-12 11:24:28.346593] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a23c0) on tqpair(0x2142540): expected_datao=0, payload_size=4096 00:20:02.373 [2024-07-12 11:24:28.346600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346618] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.346628] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.386940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.373 [2024-07-12 11:24:28.386959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.373 [2024-07-12 11:24:28.386967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.386974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.373 [2024-07-12 11:24:28.386987] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:02.373 [2024-07-12 11:24:28.387004] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:02.373 [2024-07-12 11:24:28.387014] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:02.373 [2024-07-12 11:24:28.387023] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:02.373 [2024-07-12 11:24:28.387031] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:02.373 [2024-07-12 11:24:28.387039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:02.373 [2024-07-12 11:24:28.387054] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:02.373 [2024-07-12 11:24:28.387066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.373 [2024-07-12 11:24:28.387115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.373 [2024-07-12 11:24:28.387207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.373 [2024-07-12 11:24:28.387222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.373 [2024-07-12 11:24:28.387229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540 00:20:02.373 [2024-07-12 11:24:28.387248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387263] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.373 [2024-07-12 11:24:28.387283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.373 [2024-07-12 11:24:28.387315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.373 [2024-07-12 11:24:28.387347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.373 [2024-07-12 11:24:28.387377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:20:02.373 [2024-07-12 11:24:28.387411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:02.373 [2024-07-12 11:24:28.387427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.373 [2024-07-12 11:24:28.387466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a23c0, cid 0, qid 0 00:20:02.373 [2024-07-12 11:24:28.387492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2540, cid 1, qid 0 00:20:02.373 [2024-07-12 11:24:28.387500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a26c0, cid 2, qid 0 00:20:02.373 [2024-07-12 11:24:28.387508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0 00:20:02.373 [2024-07-12 11:24:28.387515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a29c0, cid 4, qid 0 00:20:02.373 [2024-07-12 11:24:28.387711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.373 [2024-07-12 11:24:28.387723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.373 [2024-07-12 11:24:28.387730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a29c0) on tqpair=0x2142540 00:20:02.373 [2024-07-12 11:24:28.387746] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:02.373 [2024-07-12 
11:24:28.387755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:02.373 [2024-07-12 11:24:28.387773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2142540) 00:20:02.373 [2024-07-12 11:24:28.387793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.373 [2024-07-12 11:24:28.387814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a29c0, cid 4, qid 0 00:20:02.373 [2024-07-12 11:24:28.387921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.373 [2024-07-12 11:24:28.387935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.373 [2024-07-12 11:24:28.387942] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387949] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2142540): datao=0, datal=4096, cccid=4 00:20:02.373 [2024-07-12 11:24:28.387957] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a29c0) on tqpair(0x2142540): expected_datao=0, payload_size=4096 00:20:02.373 [2024-07-12 11:24:28.387964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387974] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387982] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.373 [2024-07-12 11:24:28.387993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.374 [2024-07-12 11:24:28.388003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.374 [2024-07-12 11:24:28.388009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a29c0) on tqpair=0x2142540 00:20:02.374 [2024-07-12 11:24:28.388034] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:02.374 [2024-07-12 11:24:28.388070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2142540) 00:20:02.374 [2024-07-12 11:24:28.388092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.374 [2024-07-12 11:24:28.388107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2142540) 00:20:02.374 [2024-07-12 11:24:28.388131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.374 [2024-07-12 11:24:28.388157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a29c0, cid 4, qid 0 00:20:02.374 [2024-07-12 11:24:28.388169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2b40, cid 5, qid 0 00:20:02.374 [2024-07-12 11:24:28.388312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.374 [2024-07-12 11:24:28.388324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.374 [2024-07-12 11:24:28.388331] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388338] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2142540): datao=0, datal=1024, cccid=4 00:20:02.374 [2024-07-12 11:24:28.388346] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a29c0) on tqpair(0x2142540): expected_datao=0, payload_size=1024 00:20:02.374 [2024-07-12 11:24:28.388353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388363] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388370] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.374 [2024-07-12 11:24:28.388388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.374 [2024-07-12 11:24:28.388394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.388401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2b40) on tqpair=0x2142540 00:20:02.374 [2024-07-12 11:24:28.432890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.374 [2024-07-12 11:24:28.432907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.374 [2024-07-12 11:24:28.432915] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.432922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a29c0) on tqpair=0x2142540 00:20:02.374 [2024-07-12 11:24:28.432939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.432949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2142540) 00:20:02.374 [2024-07-12 11:24:28.432959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.374 [2024-07-12 11:24:28.432989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x21a29c0, cid 4, qid 0 00:20:02.374 [2024-07-12 11:24:28.433130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.374 [2024-07-12 11:24:28.433142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.374 [2024-07-12 11:24:28.433150] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433156] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2142540): datao=0, datal=3072, cccid=4 00:20:02.374 [2024-07-12 11:24:28.433164] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a29c0) on tqpair(0x2142540): expected_datao=0, payload_size=3072 00:20:02.374 [2024-07-12 11:24:28.433172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433182] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433189] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.374 [2024-07-12 11:24:28.433210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.374 [2024-07-12 11:24:28.433221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a29c0) on tqpair=0x2142540 00:20:02.374 [2024-07-12 11:24:28.433244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2142540) 00:20:02.374 [2024-07-12 11:24:28.433263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.374 [2024-07-12 11:24:28.433292] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a29c0, cid 4, qid 0 00:20:02.374 [2024-07-12 11:24:28.433387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.374 [2024-07-12 11:24:28.433402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.374 [2024-07-12 11:24:28.433409] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433415] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2142540): datao=0, datal=8, cccid=4 00:20:02.374 [2024-07-12 11:24:28.433423] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a29c0) on tqpair(0x2142540): expected_datao=0, payload_size=8 00:20:02.374 [2024-07-12 11:24:28.433431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433441] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.433449] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.473993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.374 [2024-07-12 11:24:28.474012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.374 [2024-07-12 11:24:28.474020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.374 [2024-07-12 11:24:28.474027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a29c0) on tqpair=0x2142540 00:20:02.374 ===================================================== 00:20:02.374 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:02.374 ===================================================== 00:20:02.374 Controller Capabilities/Features 00:20:02.374 ================================ 00:20:02.374 Vendor ID: 0000 00:20:02.374 Subsystem Vendor ID: 0000 00:20:02.374 Serial Number: .................... 
00:20:02.374 Model Number: ........................................ 00:20:02.374 Firmware Version: 24.09 00:20:02.374 Recommended Arb Burst: 0 00:20:02.374 IEEE OUI Identifier: 00 00 00 00:20:02.374 Multi-path I/O 00:20:02.374 May have multiple subsystem ports: No 00:20:02.374 May have multiple controllers: No 00:20:02.374 Associated with SR-IOV VF: No 00:20:02.374 Max Data Transfer Size: 131072 00:20:02.374 Max Number of Namespaces: 0 00:20:02.374 Max Number of I/O Queues: 1024 00:20:02.374 NVMe Specification Version (VS): 1.3 00:20:02.374 NVMe Specification Version (Identify): 1.3 00:20:02.374 Maximum Queue Entries: 128 00:20:02.374 Contiguous Queues Required: Yes 00:20:02.374 Arbitration Mechanisms Supported 00:20:02.374 Weighted Round Robin: Not Supported 00:20:02.374 Vendor Specific: Not Supported 00:20:02.374 Reset Timeout: 15000 ms 00:20:02.374 Doorbell Stride: 4 bytes 00:20:02.374 NVM Subsystem Reset: Not Supported 00:20:02.374 Command Sets Supported 00:20:02.374 NVM Command Set: Supported 00:20:02.374 Boot Partition: Not Supported 00:20:02.374 Memory Page Size Minimum: 4096 bytes 00:20:02.374 Memory Page Size Maximum: 4096 bytes 00:20:02.374 Persistent Memory Region: Not Supported 00:20:02.374 Optional Asynchronous Events Supported 00:20:02.374 Namespace Attribute Notices: Not Supported 00:20:02.374 Firmware Activation Notices: Not Supported 00:20:02.374 ANA Change Notices: Not Supported 00:20:02.374 PLE Aggregate Log Change Notices: Not Supported 00:20:02.374 LBA Status Info Alert Notices: Not Supported 00:20:02.374 EGE Aggregate Log Change Notices: Not Supported 00:20:02.374 Normal NVM Subsystem Shutdown event: Not Supported 00:20:02.374 Zone Descriptor Change Notices: Not Supported 00:20:02.374 Discovery Log Change Notices: Supported 00:20:02.374 Controller Attributes 00:20:02.374 128-bit Host Identifier: Not Supported 00:20:02.374 Non-Operational Permissive Mode: Not Supported 00:20:02.374 NVM Sets: Not Supported 00:20:02.374 Read Recovery Levels: Not 
Supported 00:20:02.374 Endurance Groups: Not Supported 00:20:02.374 Predictable Latency Mode: Not Supported 00:20:02.374 Traffic Based Keep ALive: Not Supported 00:20:02.374 Namespace Granularity: Not Supported 00:20:02.374 SQ Associations: Not Supported 00:20:02.374 UUID List: Not Supported 00:20:02.374 Multi-Domain Subsystem: Not Supported 00:20:02.374 Fixed Capacity Management: Not Supported 00:20:02.374 Variable Capacity Management: Not Supported 00:20:02.374 Delete Endurance Group: Not Supported 00:20:02.374 Delete NVM Set: Not Supported 00:20:02.374 Extended LBA Formats Supported: Not Supported 00:20:02.374 Flexible Data Placement Supported: Not Supported 00:20:02.374 00:20:02.374 Controller Memory Buffer Support 00:20:02.374 ================================ 00:20:02.374 Supported: No 00:20:02.374 00:20:02.374 Persistent Memory Region Support 00:20:02.374 ================================ 00:20:02.374 Supported: No 00:20:02.374 00:20:02.374 Admin Command Set Attributes 00:20:02.374 ============================ 00:20:02.374 Security Send/Receive: Not Supported 00:20:02.374 Format NVM: Not Supported 00:20:02.374 Firmware Activate/Download: Not Supported 00:20:02.374 Namespace Management: Not Supported 00:20:02.374 Device Self-Test: Not Supported 00:20:02.374 Directives: Not Supported 00:20:02.374 NVMe-MI: Not Supported 00:20:02.374 Virtualization Management: Not Supported 00:20:02.374 Doorbell Buffer Config: Not Supported 00:20:02.374 Get LBA Status Capability: Not Supported 00:20:02.374 Command & Feature Lockdown Capability: Not Supported 00:20:02.374 Abort Command Limit: 1 00:20:02.375 Async Event Request Limit: 4 00:20:02.375 Number of Firmware Slots: N/A 00:20:02.375 Firmware Slot 1 Read-Only: N/A 00:20:02.375 Firmware Activation Without Reset: N/A 00:20:02.375 Multiple Update Detection Support: N/A 00:20:02.375 Firmware Update Granularity: No Information Provided 00:20:02.375 Per-Namespace SMART Log: No 00:20:02.375 Asymmetric Namespace Access Log Page: Not 
Supported 00:20:02.375 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:02.375 Command Effects Log Page: Not Supported 00:20:02.375 Get Log Page Extended Data: Supported 00:20:02.375 Telemetry Log Pages: Not Supported 00:20:02.375 Persistent Event Log Pages: Not Supported 00:20:02.375 Supported Log Pages Log Page: May Support 00:20:02.375 Commands Supported & Effects Log Page: Not Supported 00:20:02.375 Feature Identifiers & Effects Log Page:May Support 00:20:02.375 NVMe-MI Commands & Effects Log Page: May Support 00:20:02.375 Data Area 4 for Telemetry Log: Not Supported 00:20:02.375 Error Log Page Entries Supported: 128 00:20:02.375 Keep Alive: Not Supported 00:20:02.375 00:20:02.375 NVM Command Set Attributes 00:20:02.375 ========================== 00:20:02.375 Submission Queue Entry Size 00:20:02.375 Max: 1 00:20:02.375 Min: 1 00:20:02.375 Completion Queue Entry Size 00:20:02.375 Max: 1 00:20:02.375 Min: 1 00:20:02.375 Number of Namespaces: 0 00:20:02.375 Compare Command: Not Supported 00:20:02.375 Write Uncorrectable Command: Not Supported 00:20:02.375 Dataset Management Command: Not Supported 00:20:02.375 Write Zeroes Command: Not Supported 00:20:02.375 Set Features Save Field: Not Supported 00:20:02.375 Reservations: Not Supported 00:20:02.375 Timestamp: Not Supported 00:20:02.375 Copy: Not Supported 00:20:02.375 Volatile Write Cache: Not Present 00:20:02.375 Atomic Write Unit (Normal): 1 00:20:02.375 Atomic Write Unit (PFail): 1 00:20:02.375 Atomic Compare & Write Unit: 1 00:20:02.375 Fused Compare & Write: Supported 00:20:02.375 Scatter-Gather List 00:20:02.375 SGL Command Set: Supported 00:20:02.375 SGL Keyed: Supported 00:20:02.375 SGL Bit Bucket Descriptor: Not Supported 00:20:02.375 SGL Metadata Pointer: Not Supported 00:20:02.375 Oversized SGL: Not Supported 00:20:02.375 SGL Metadata Address: Not Supported 00:20:02.375 SGL Offset: Supported 00:20:02.375 Transport SGL Data Block: Not Supported 00:20:02.375 Replay Protected Memory Block: Not 
Supported 00:20:02.375 00:20:02.375 Firmware Slot Information 00:20:02.375 ========================= 00:20:02.375 Active slot: 0 00:20:02.375 00:20:02.375 00:20:02.375 Error Log 00:20:02.375 ========= 00:20:02.375 00:20:02.375 Active Namespaces 00:20:02.375 ================= 00:20:02.375 Discovery Log Page 00:20:02.375 ================== 00:20:02.375 Generation Counter: 2 00:20:02.375 Number of Records: 2 00:20:02.375 Record Format: 0 00:20:02.375 00:20:02.375 Discovery Log Entry 0 00:20:02.375 ---------------------- 00:20:02.375 Transport Type: 3 (TCP) 00:20:02.375 Address Family: 1 (IPv4) 00:20:02.375 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:02.375 Entry Flags: 00:20:02.375 Duplicate Returned Information: 1 00:20:02.375 Explicit Persistent Connection Support for Discovery: 1 00:20:02.375 Transport Requirements: 00:20:02.375 Secure Channel: Not Required 00:20:02.375 Port ID: 0 (0x0000) 00:20:02.375 Controller ID: 65535 (0xffff) 00:20:02.375 Admin Max SQ Size: 128 00:20:02.375 Transport Service Identifier: 4420 00:20:02.375 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:02.375 Transport Address: 10.0.0.2 00:20:02.375 Discovery Log Entry 1 00:20:02.375 ---------------------- 00:20:02.375 Transport Type: 3 (TCP) 00:20:02.375 Address Family: 1 (IPv4) 00:20:02.375 Subsystem Type: 2 (NVM Subsystem) 00:20:02.375 Entry Flags: 00:20:02.375 Duplicate Returned Information: 0 00:20:02.375 Explicit Persistent Connection Support for Discovery: 0 00:20:02.375 Transport Requirements: 00:20:02.375 Secure Channel: Not Required 00:20:02.375 Port ID: 0 (0x0000) 00:20:02.375 Controller ID: 65535 (0xffff) 00:20:02.375 Admin Max SQ Size: 128 00:20:02.375 Transport Service Identifier: 4420 00:20:02.375 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:02.375 Transport Address: 10.0.0.2 [2024-07-12 11:24:28.474141] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 
00:20:02.375 [2024-07-12 11:24:28.474172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a23c0) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:02.375 [2024-07-12 11:24:28.474194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2540) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:02.375 [2024-07-12 11:24:28.474211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a26c0) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:02.375 [2024-07-12 11:24:28.474227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:02.375 [2024-07-12 11:24:28.474253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.375 [2024-07-12 11:24:28.474297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.375 [2024-07-12 11:24:28.474322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.375 [2024-07-12 11:24:28.474447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.375 [2024-07-12 11:24:28.474461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.375 [2024-07-12 11:24:28.474473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.375 [2024-07-12 11:24:28.474519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.375 [2024-07-12 11:24:28.474546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.375 [2024-07-12 11:24:28.474637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.375 [2024-07-12 11:24:28.474649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.375 [2024-07-12 11:24:28.474656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474673] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:20:02.375 [2024-07-12 11:24:28.474681] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:20:02.375 [2024-07-12 11:24:28.474697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.375 [2024-07-12 11:24:28.474724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.375 [2024-07-12 11:24:28.474745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.375 [2024-07-12 11:24:28.474822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.375 [2024-07-12 11:24:28.474835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.375 [2024-07-12 11:24:28.474842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.474873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.474892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.375 [2024-07-12 11:24:28.474903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.375 [2024-07-12 11:24:28.474925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.375 [2024-07-12 11:24:28.475004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.375 [2024-07-12 11:24:28.475019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.375 [2024-07-12 11:24:28.475026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.475033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.375 [2024-07-12 11:24:28.475050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.375 [2024-07-12 11:24:28.475060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.376 [2024-07-12 11:24:28.475078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.376 [2024-07-12 11:24:28.475099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.376 [2024-07-12 11:24:28.475178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.376 [2024-07-12 11:24:28.475192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.376 [2024-07-12 11:24:28.475199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.376 [2024-07-12 11:24:28.475223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.376 [2024-07-12 11:24:28.475250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.376 [2024-07-12 11:24:28.475271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.376 [2024-07-12 11:24:28.475350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.376 [2024-07-12 11:24:28.475365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.376 [2024-07-12 11:24:28.475372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.376 [2024-07-12 11:24:28.475395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.376 [2024-07-12 11:24:28.475423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.376 [2024-07-12 11:24:28.475444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.376 [2024-07-12 11:24:28.475527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.376 [2024-07-12 11:24:28.475539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.376 [2024-07-12 11:24:28.475547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.376 [2024-07-12 11:24:28.475570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.376 [2024-07-12 11:24:28.475597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.376 [2024-07-12 11:24:28.475618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.376 [2024-07-12 11:24:28.475690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.376 [2024-07-12 11:24:28.475702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.376 [2024-07-12 11:24:28.475709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.376 [2024-07-12 11:24:28.475733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.475749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.376 [2024-07-12 11:24:28.475760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.376 [2024-07-12 11:24:28.475781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.376 [2024-07-12 11:24:28.478876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.376 [2024-07-12 11:24:28.478896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.376 [2024-07-12 11:24:28.478905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.478928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.376 [2024-07-12 11:24:28.478947] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.478957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.478964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2142540)
00:20:02.376 [2024-07-12 11:24:28.478975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.376 [2024-07-12 11:24:28.478997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a2840, cid 3, qid 0
00:20:02.376 [2024-07-12 11:24:28.479132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.376 [2024-07-12 11:24:28.479146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.376 [2024-07-12 11:24:28.479153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.376 [2024-07-12 11:24:28.479164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a2840) on tqpair=0x2142540
00:20:02.376 [2024-07-12 11:24:28.479177] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds
00:20:02.376
00:20:02.376 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:20:02.636 [2024-07-12 11:24:28.512420] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:20:02.636 [2024-07-12 11:24:28.512459] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627083 ]
00:20:02.636 EAL: No free 2048 kB hugepages reported on node 1
00:20:02.636 [2024-07-12 11:24:28.545668] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:20:02.636 [2024-07-12 11:24:28.545720] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:02.636 [2024-07-12 11:24:28.545730] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:02.636 [2024-07-12 11:24:28.545745] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:02.636 [2024-07-12 11:24:28.545755] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:02.636 [2024-07-12 11:24:28.545962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:20:02.636 [2024-07-12 11:24:28.546004] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8b7540 0
00:20:02.636 [2024-07-12 11:24:28.556891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:02.636 [2024-07-12 11:24:28.556911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:02.636 [2024-07-12 11:24:28.556919] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:02.636 [2024-07-12 11:24:28.556926] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:02.636 [2024-07-12 11:24:28.556964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.556976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.556983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.636 [2024-07-12 11:24:28.556997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:02.636 [2024-07-12 11:24:28.557026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.636 [2024-07-12 11:24:28.567881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.636 [2024-07-12 11:24:28.567898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.636 [2024-07-12 11:24:28.567905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.567912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.636 [2024-07-12 11:24:28.567929] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:02.636 [2024-07-12 11:24:28.567940] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:20:02.636 [2024-07-12 11:24:28.567949] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:20:02.636 [2024-07-12 11:24:28.567966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.567974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.567981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.636 [2024-07-12 11:24:28.567992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.636 [2024-07-12 11:24:28.568014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.636 [2024-07-12 11:24:28.568143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.636 [2024-07-12 11:24:28.568158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.636 [2024-07-12 11:24:28.568165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.636 [2024-07-12 11:24:28.568180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:20:02.636 [2024-07-12 11:24:28.568193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:20:02.636 [2024-07-12 11:24:28.568206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.636 [2024-07-12 11:24:28.568230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.636 [2024-07-12 11:24:28.568252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.636 [2024-07-12 11:24:28.568335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.636 [2024-07-12 11:24:28.568349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.636 [2024-07-12 11:24:28.568356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.636 [2024-07-12 11:24:28.568371] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:20:02.636 [2024-07-12 11:24:28.568384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:20:02.636 [2024-07-12 11:24:28.568396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.636 [2024-07-12 11:24:28.568421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.636 [2024-07-12 11:24:28.568441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.636 [2024-07-12 11:24:28.568516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.636 [2024-07-12 11:24:28.568528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.636 [2024-07-12 11:24:28.568536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.636 [2024-07-12 11:24:28.568551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:02.636 [2024-07-12 11:24:28.568567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.636 [2024-07-12 11:24:28.568577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.568583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.568594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.568614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.637 [2024-07-12 11:24:28.568690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.568702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.568709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.568716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.568724] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:20:02.637 [2024-07-12 11:24:28.568732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:20:02.637 [2024-07-12 11:24:28.568745] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:02.637 [2024-07-12 11:24:28.568854] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:20:02.637 [2024-07-12 11:24:28.568862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:02.637 [2024-07-12 11:24:28.568882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.568891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.568897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.568908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.568929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.637 [2024-07-12 11:24:28.569048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.569062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.569069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.569084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:02.637 [2024-07-12 11:24:28.569100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.569151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.637 [2024-07-12 11:24:28.569224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.569236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.569243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.569257] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:02.637 [2024-07-12 11:24:28.569266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.569279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:20:02.637 [2024-07-12 11:24:28.569296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.569311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.569350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.637 [2024-07-12 11:24:28.569468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:02.637 [2024-07-12 11:24:28.569482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:02.637 [2024-07-12 11:24:28.569490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=4096, cccid=0
00:20:02.637 [2024-07-12 11:24:28.569504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9173c0) on tqpair(0x8b7540): expected_datao=0, payload_size=4096
00:20:02.637 [2024-07-12 11:24:28.569512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569529] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569538] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.569590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.569597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.569615] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:20:02.637 [2024-07-12 11:24:28.569627] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:20:02.637 [2024-07-12 11:24:28.569635] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:20:02.637 [2024-07-12 11:24:28.569642] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:20:02.637 [2024-07-12 11:24:28.569650] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:20:02.637 [2024-07-12 11:24:28.569658] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.569673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.569685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:02.637 [2024-07-12 11:24:28.569735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.637 [2024-07-12 11:24:28.569823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.569836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.569843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.569860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:02.637 [2024-07-12 11:24:28.569903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:02.637 [2024-07-12 11:24:28.569934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:02.637 [2024-07-12 11:24:28.569966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.569979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.569987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:02.637 [2024-07-12 11:24:28.569996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.570045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.570067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9173c0, cid 0, qid 0
00:20:02.637 [2024-07-12 11:24:28.570078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917540, cid 1, qid 0
00:20:02.637 [2024-07-12 11:24:28.570086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9176c0, cid 2, qid 0
00:20:02.637 [2024-07-12 11:24:28.570094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917840, cid 3, qid 0
00:20:02.637 [2024-07-12 11:24:28.570101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0
00:20:02.637 [2024-07-12 11:24:28.570258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.570271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.570278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.570293] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:20:02.637 [2024-07-12 11:24:28.570302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.570360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:02.637 [2024-07-12 11:24:28.570381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0
00:20:02.637 [2024-07-12 11:24:28.570491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.570505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.570512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.570583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.570615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.570633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.570654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0
00:20:02.637 [2024-07-12 11:24:28.570749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:02.637 [2024-07-12 11:24:28.570762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:02.637 [2024-07-12 11:24:28.570769] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570775] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=4096, cccid=4
00:20:02.637 [2024-07-12 11:24:28.570783] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9179c0) on tqpair(0x8b7540): expected_datao=0, payload_size=4096
00:20:02.637 [2024-07-12 11:24:28.570790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570806] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.570815] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.611974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.611993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.612004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.612028] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:20:02.637 [2024-07-12 11:24:28.612045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.612063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:20:02.637 [2024-07-12 11:24:28.612077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540)
00:20:02.637 [2024-07-12 11:24:28.612096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:02.637 [2024-07-12 11:24:28.612119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0
00:20:02.637 [2024-07-12 11:24:28.612248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:02.637 [2024-07-12 11:24:28.612262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:02.637 [2024-07-12 11:24:28.612269] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612276] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=4096, cccid=4
00:20:02.637 [2024-07-12 11:24:28.612284] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9179c0) on tqpair(0x8b7540): expected_datao=0, payload_size=4096
00:20:02.637 [2024-07-12 11:24:28.612291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612302] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612309] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:02.637 [2024-07-12 11:24:28.612330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:02.637 [2024-07-12 11:24:28.612337] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:02.637 [2024-07-12 11:24:28.612343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540
00:20:02.637 [2024-07-12 11:24:28.612363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:20:02.637 [2024-07-12
11:24:28.612381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:02.637 [2024-07-12 11:24:28.612395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.637 [2024-07-12 11:24:28.612403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540) 00:20:02.637 [2024-07-12 11:24:28.612414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.637 [2024-07-12 11:24:28.612435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0 00:20:02.637 [2024-07-12 11:24:28.612528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.637 [2024-07-12 11:24:28.612543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.637 [2024-07-12 11:24:28.612550] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.637 [2024-07-12 11:24:28.612556] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=4096, cccid=4 00:20:02.637 [2024-07-12 11:24:28.612564] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9179c0) on tqpair(0x8b7540): expected_datao=0, payload_size=4096 00:20:02.637 [2024-07-12 11:24:28.612571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.637 [2024-07-12 11:24:28.612588] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.637 [2024-07-12 11:24:28.612601] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.637 [2024-07-12 11:24:28.652980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.637 [2024-07-12 11:24:28.652999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.637 [2024-07-12 11:24:28.653007] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.637 [2024-07-12 11:24:28.653014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540 00:20:02.637 [2024-07-12 11:24:28.653028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:02.637 [2024-07-12 11:24:28.653043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:02.637 [2024-07-12 11:24:28.653060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:02.637 [2024-07-12 11:24:28.653071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:02.637 [2024-07-12 11:24:28.653079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:02.638 [2024-07-12 11:24:28.653088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:02.638 [2024-07-12 11:24:28.653097] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:02.638 [2024-07-12 11:24:28.653104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:02.638 [2024-07-12 11:24:28.653113] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:02.638 [2024-07-12 11:24:28.653132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.653164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.638 [2024-07-12 11:24:28.653213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0 00:20:02.638 [2024-07-12 11:24:28.653225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917b40, cid 5, qid 0 00:20:02.638 [2024-07-12 11:24:28.653310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.653322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.653329] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.653346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.653355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.653362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917b40) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.653384] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.653429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917b40, cid 5, qid 0 00:20:02.638 [2024-07-12 11:24:28.653512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.653524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.653531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917b40) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.653554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.653593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917b40, cid 5, qid 0 00:20:02.638 [2024-07-12 11:24:28.653685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.653697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.653704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917b40) on 
tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.653726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.653765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917b40, cid 5, qid 0 00:20:02.638 [2024-07-12 11:24:28.653838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.653850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.653857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917b40) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.653895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.653929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 
11:24:28.653958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.653974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.653986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.653994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8b7540) 00:20:02.638 [2024-07-12 11:24:28.654003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.638 [2024-07-12 11:24:28.654029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917b40, cid 5, qid 0 00:20:02.638 [2024-07-12 11:24:28.654041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9179c0, cid 4, qid 0 00:20:02.638 [2024-07-12 11:24:28.654049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917cc0, cid 6, qid 0 00:20:02.638 [2024-07-12 11:24:28.654056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917e40, cid 7, qid 0 00:20:02.638 [2024-07-12 11:24:28.654272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.638 [2024-07-12 11:24:28.654287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.638 [2024-07-12 11:24:28.654294] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654301] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=8192, cccid=5 00:20:02.638 [2024-07-12 11:24:28.654309] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x917b40) on tqpair(0x8b7540): expected_datao=0, payload_size=8192 00:20:02.638 [2024-07-12 11:24:28.654317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654327] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654334] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.638 [2024-07-12 11:24:28.654352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.638 [2024-07-12 11:24:28.654358] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654365] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=512, cccid=4 00:20:02.638 [2024-07-12 11:24:28.654372] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9179c0) on tqpair(0x8b7540): expected_datao=0, payload_size=512 00:20:02.638 [2024-07-12 11:24:28.654380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654389] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654396] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.638 [2024-07-12 11:24:28.654413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.638 [2024-07-12 11:24:28.654420] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654426] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=512, cccid=6 00:20:02.638 [2024-07-12 11:24:28.654433] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x917cc0) on tqpair(0x8b7540): expected_datao=0, payload_size=512 
00:20:02.638 [2024-07-12 11:24:28.654441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654450] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654457] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.638 [2024-07-12 11:24:28.654474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.638 [2024-07-12 11:24:28.654481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654487] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b7540): datao=0, datal=4096, cccid=7 00:20:02.638 [2024-07-12 11:24:28.654495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x917e40) on tqpair(0x8b7540): expected_datao=0, payload_size=4096 00:20:02.638 [2024-07-12 11:24:28.654502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654512] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654519] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.654558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.654566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917b40) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.654591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.654601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.654607] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9179c0) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.654629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.654638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.654645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917cc0) on tqpair=0x8b7540 00:20:02.638 [2024-07-12 11:24:28.654661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.638 [2024-07-12 11:24:28.654671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.638 [2024-07-12 11:24:28.654677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.638 [2024-07-12 11:24:28.654683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917e40) on tqpair=0x8b7540 00:20:02.638 ===================================================== 00:20:02.638 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.638 ===================================================== 00:20:02.638 Controller Capabilities/Features 00:20:02.638 ================================ 00:20:02.638 Vendor ID: 8086 00:20:02.638 Subsystem Vendor ID: 8086 00:20:02.638 Serial Number: SPDK00000000000001 00:20:02.638 Model Number: SPDK bdev Controller 00:20:02.638 Firmware Version: 24.09 00:20:02.638 Recommended Arb Burst: 6 00:20:02.638 IEEE OUI Identifier: e4 d2 5c 00:20:02.638 Multi-path I/O 00:20:02.638 May have multiple subsystem ports: Yes 00:20:02.638 May have multiple controllers: Yes 00:20:02.638 Associated with SR-IOV VF: No 00:20:02.638 Max Data Transfer Size: 131072 00:20:02.638 Max Number of Namespaces: 32 00:20:02.638 Max Number of I/O 
Queues: 127 00:20:02.638 NVMe Specification Version (VS): 1.3 00:20:02.638 NVMe Specification Version (Identify): 1.3 00:20:02.638 Maximum Queue Entries: 128 00:20:02.638 Contiguous Queues Required: Yes 00:20:02.638 Arbitration Mechanisms Supported 00:20:02.638 Weighted Round Robin: Not Supported 00:20:02.638 Vendor Specific: Not Supported 00:20:02.638 Reset Timeout: 15000 ms 00:20:02.638 Doorbell Stride: 4 bytes 00:20:02.638 NVM Subsystem Reset: Not Supported 00:20:02.638 Command Sets Supported 00:20:02.638 NVM Command Set: Supported 00:20:02.638 Boot Partition: Not Supported 00:20:02.638 Memory Page Size Minimum: 4096 bytes 00:20:02.638 Memory Page Size Maximum: 4096 bytes 00:20:02.638 Persistent Memory Region: Not Supported 00:20:02.638 Optional Asynchronous Events Supported 00:20:02.638 Namespace Attribute Notices: Supported 00:20:02.638 Firmware Activation Notices: Not Supported 00:20:02.638 ANA Change Notices: Not Supported 00:20:02.638 PLE Aggregate Log Change Notices: Not Supported 00:20:02.638 LBA Status Info Alert Notices: Not Supported 00:20:02.638 EGE Aggregate Log Change Notices: Not Supported 00:20:02.638 Normal NVM Subsystem Shutdown event: Not Supported 00:20:02.638 Zone Descriptor Change Notices: Not Supported 00:20:02.638 Discovery Log Change Notices: Not Supported 00:20:02.638 Controller Attributes 00:20:02.638 128-bit Host Identifier: Supported 00:20:02.638 Non-Operational Permissive Mode: Not Supported 00:20:02.638 NVM Sets: Not Supported 00:20:02.638 Read Recovery Levels: Not Supported 00:20:02.638 Endurance Groups: Not Supported 00:20:02.638 Predictable Latency Mode: Not Supported 00:20:02.638 Traffic Based Keep ALive: Not Supported 00:20:02.638 Namespace Granularity: Not Supported 00:20:02.638 SQ Associations: Not Supported 00:20:02.638 UUID List: Not Supported 00:20:02.638 Multi-Domain Subsystem: Not Supported 00:20:02.638 Fixed Capacity Management: Not Supported 00:20:02.638 Variable Capacity Management: Not Supported 00:20:02.638 Delete 
Endurance Group: Not Supported 00:20:02.638 Delete NVM Set: Not Supported 00:20:02.638 Extended LBA Formats Supported: Not Supported 00:20:02.638 Flexible Data Placement Supported: Not Supported 00:20:02.638 00:20:02.638 Controller Memory Buffer Support 00:20:02.638 ================================ 00:20:02.638 Supported: No 00:20:02.638 00:20:02.638 Persistent Memory Region Support 00:20:02.638 ================================ 00:20:02.638 Supported: No 00:20:02.638 00:20:02.638 Admin Command Set Attributes 00:20:02.638 ============================ 00:20:02.638 Security Send/Receive: Not Supported 00:20:02.638 Format NVM: Not Supported 00:20:02.638 Firmware Activate/Download: Not Supported 00:20:02.638 Namespace Management: Not Supported 00:20:02.638 Device Self-Test: Not Supported 00:20:02.638 Directives: Not Supported 00:20:02.638 NVMe-MI: Not Supported 00:20:02.638 Virtualization Management: Not Supported 00:20:02.638 Doorbell Buffer Config: Not Supported 00:20:02.638 Get LBA Status Capability: Not Supported 00:20:02.638 Command & Feature Lockdown Capability: Not Supported 00:20:02.638 Abort Command Limit: 4 00:20:02.638 Async Event Request Limit: 4 00:20:02.638 Number of Firmware Slots: N/A 00:20:02.638 Firmware Slot 1 Read-Only: N/A 00:20:02.638 Firmware Activation Without Reset: N/A 00:20:02.638 Multiple Update Detection Support: N/A 00:20:02.638 Firmware Update Granularity: No Information Provided 00:20:02.638 Per-Namespace SMART Log: No 00:20:02.638 Asymmetric Namespace Access Log Page: Not Supported 00:20:02.638 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:02.638 Command Effects Log Page: Supported 00:20:02.638 Get Log Page Extended Data: Supported 00:20:02.638 Telemetry Log Pages: Not Supported 00:20:02.638 Persistent Event Log Pages: Not Supported 00:20:02.638 Supported Log Pages Log Page: May Support 00:20:02.638 Commands Supported & Effects Log Page: Not Supported 00:20:02.638 Feature Identifiers & Effects Log Page:May Support 00:20:02.638 NVMe-MI 
Commands & Effects Log Page: May Support 00:20:02.638 Data Area 4 for Telemetry Log: Not Supported 00:20:02.638 Error Log Page Entries Supported: 128 00:20:02.638 Keep Alive: Supported 00:20:02.638 Keep Alive Granularity: 10000 ms 00:20:02.638 00:20:02.638 NVM Command Set Attributes 00:20:02.638 ========================== 00:20:02.638 Submission Queue Entry Size 00:20:02.638 Max: 64 00:20:02.638 Min: 64 00:20:02.638 Completion Queue Entry Size 00:20:02.638 Max: 16 00:20:02.638 Min: 16 00:20:02.638 Number of Namespaces: 32 00:20:02.638 Compare Command: Supported 00:20:02.638 Write Uncorrectable Command: Not Supported 00:20:02.638 Dataset Management Command: Supported 00:20:02.638 Write Zeroes Command: Supported 00:20:02.638 Set Features Save Field: Not Supported 00:20:02.638 Reservations: Supported 00:20:02.638 Timestamp: Not Supported 00:20:02.638 Copy: Supported 00:20:02.638 Volatile Write Cache: Present 00:20:02.638 Atomic Write Unit (Normal): 1 00:20:02.639 Atomic Write Unit (PFail): 1 00:20:02.639 Atomic Compare & Write Unit: 1 00:20:02.639 Fused Compare & Write: Supported 00:20:02.639 Scatter-Gather List 00:20:02.639 SGL Command Set: Supported 00:20:02.639 SGL Keyed: Supported 00:20:02.639 SGL Bit Bucket Descriptor: Not Supported 00:20:02.639 SGL Metadata Pointer: Not Supported 00:20:02.639 Oversized SGL: Not Supported 00:20:02.639 SGL Metadata Address: Not Supported 00:20:02.639 SGL Offset: Supported 00:20:02.639 Transport SGL Data Block: Not Supported 00:20:02.639 Replay Protected Memory Block: Not Supported 00:20:02.639 00:20:02.639 Firmware Slot Information 00:20:02.639 ========================= 00:20:02.639 Active slot: 1 00:20:02.639 Slot 1 Firmware Revision: 24.09 00:20:02.639 00:20:02.639 00:20:02.639 Commands Supported and Effects 00:20:02.639 ============================== 00:20:02.639 Admin Commands 00:20:02.639 -------------- 00:20:02.639 Get Log Page (02h): Supported 00:20:02.639 Identify (06h): Supported 00:20:02.639 Abort (08h): Supported 
00:20:02.639 Set Features (09h): Supported 00:20:02.639 Get Features (0Ah): Supported 00:20:02.639 Asynchronous Event Request (0Ch): Supported 00:20:02.639 Keep Alive (18h): Supported 00:20:02.639 I/O Commands 00:20:02.639 ------------ 00:20:02.639 Flush (00h): Supported LBA-Change 00:20:02.639 Write (01h): Supported LBA-Change 00:20:02.639 Read (02h): Supported 00:20:02.639 Compare (05h): Supported 00:20:02.639 Write Zeroes (08h): Supported LBA-Change 00:20:02.639 Dataset Management (09h): Supported LBA-Change 00:20:02.639 Copy (19h): Supported LBA-Change 00:20:02.639 00:20:02.639 Error Log 00:20:02.639 ========= 00:20:02.639 00:20:02.639 Arbitration 00:20:02.639 =========== 00:20:02.639 Arbitration Burst: 1 00:20:02.639 00:20:02.639 Power Management 00:20:02.639 ================ 00:20:02.639 Number of Power States: 1 00:20:02.639 Current Power State: Power State #0 00:20:02.639 Power State #0: 00:20:02.639 Max Power: 0.00 W 00:20:02.639 Non-Operational State: Operational 00:20:02.639 Entry Latency: Not Reported 00:20:02.639 Exit Latency: Not Reported 00:20:02.639 Relative Read Throughput: 0 00:20:02.639 Relative Read Latency: 0 00:20:02.639 Relative Write Throughput: 0 00:20:02.639 Relative Write Latency: 0 00:20:02.639 Idle Power: Not Reported 00:20:02.639 Active Power: Not Reported 00:20:02.639 Non-Operational Permissive Mode: Not Supported 00:20:02.639 00:20:02.639 Health Information 00:20:02.639 ================== 00:20:02.639 Critical Warnings: 00:20:02.639 Available Spare Space: OK 00:20:02.639 Temperature: OK 00:20:02.639 Device Reliability: OK 00:20:02.639 Read Only: No 00:20:02.639 Volatile Memory Backup: OK 00:20:02.639 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:02.639 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:02.639 Available Spare: 0% 00:20:02.639 Available Spare Threshold: 0% 00:20:02.639 Life Percentage Used:[2024-07-12 11:24:28.654793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.639 [2024-07-12 
11:24:28.654805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8b7540) 00:20:02.639 [2024-07-12 11:24:28.654815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.639 [2024-07-12 11:24:28.654837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917e40, cid 7, qid 0 00:20:02.639 [2024-07-12 11:24:28.654968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.639 [2024-07-12 11:24:28.654982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.639 [2024-07-12 11:24:28.654989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.654996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917e40) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655041] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:02.639 [2024-07-12 11:24:28.655061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9173c0) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.639 [2024-07-12 11:24:28.655081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917540) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.639 [2024-07-12 11:24:28.655097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9176c0) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.639 [2024-07-12 
11:24:28.655113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917840) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.639 [2024-07-12 11:24:28.655133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b7540) 00:20:02.639 [2024-07-12 11:24:28.655176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.639 [2024-07-12 11:24:28.655199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917840, cid 3, qid 0 00:20:02.639 [2024-07-12 11:24:28.655329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.639 [2024-07-12 11:24:28.655344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.639 [2024-07-12 11:24:28.655351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917840) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b7540) 00:20:02.639 [2024-07-12 11:24:28.655394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.639 [2024-07-12 11:24:28.655420] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917840, cid 3, qid 0 00:20:02.639 [2024-07-12 11:24:28.655515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.639 [2024-07-12 11:24:28.655528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.639 [2024-07-12 11:24:28.655535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917840) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655549] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:02.639 [2024-07-12 11:24:28.655557] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:02.639 [2024-07-12 11:24:28.655573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b7540) 00:20:02.639 [2024-07-12 11:24:28.655600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.639 [2024-07-12 11:24:28.655620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917840, cid 3, qid 0 00:20:02.639 [2024-07-12 11:24:28.655694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.639 [2024-07-12 11:24:28.655706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.639 [2024-07-12 11:24:28.655713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917840) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.655736] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.655752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b7540) 00:20:02.639 [2024-07-12 11:24:28.655762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.639 [2024-07-12 11:24:28.655782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917840, cid 3, qid 0 00:20:02.639 [2024-07-12 11:24:28.659896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.639 [2024-07-12 11:24:28.659913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.639 [2024-07-12 11:24:28.659920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.659927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917840) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.659943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.659953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.659963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b7540) 00:20:02.639 [2024-07-12 11:24:28.659974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.639 [2024-07-12 11:24:28.659996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x917840, cid 3, qid 0 00:20:02.639 [2024-07-12 11:24:28.660119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.639 [2024-07-12 11:24:28.660134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.639 [2024-07-12 11:24:28.660141] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.639 [2024-07-12 11:24:28.660148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x917840) on tqpair=0x8b7540 00:20:02.639 [2024-07-12 11:24:28.660160] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:02.639 0% 00:20:02.639 Data Units Read: 0 00:20:02.639 Data Units Written: 0 00:20:02.639 Host Read Commands: 0 00:20:02.639 Host Write Commands: 0 00:20:02.639 Controller Busy Time: 0 minutes 00:20:02.639 Power Cycles: 0 00:20:02.639 Power On Hours: 0 hours 00:20:02.639 Unsafe Shutdowns: 0 00:20:02.639 Unrecoverable Media Errors: 0 00:20:02.639 Lifetime Error Log Entries: 0 00:20:02.639 Warning Temperature Time: 0 minutes 00:20:02.639 Critical Temperature Time: 0 minutes 00:20:02.639 00:20:02.639 Number of Queues 00:20:02.639 ================ 00:20:02.639 Number of I/O Submission Queues: 127 00:20:02.639 Number of I/O Completion Queues: 127 00:20:02.639 00:20:02.639 Active Namespaces 00:20:02.639 ================= 00:20:02.639 Namespace ID:1 00:20:02.639 Error Recovery Timeout: Unlimited 00:20:02.639 Command Set Identifier: NVM (00h) 00:20:02.639 Deallocate: Supported 00:20:02.639 Deallocated/Unwritten Error: Not Supported 00:20:02.639 Deallocated Read Value: Unknown 00:20:02.639 Deallocate in Write Zeroes: Not Supported 00:20:02.639 Deallocated Guard Field: 0xFFFF 00:20:02.639 Flush: Supported 00:20:02.639 Reservation: Supported 00:20:02.639 Namespace Sharing Capabilities: Multiple Controllers 00:20:02.639 Size (in LBAs): 131072 (0GiB) 00:20:02.639 Capacity (in LBAs): 131072 (0GiB) 00:20:02.639 Utilization (in LBAs): 131072 (0GiB) 00:20:02.639 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:02.639 EUI64: ABCDEF0123456789 00:20:02.639 UUID: e1a35db6-8d0b-4aca-929c-7559eb6ed436 00:20:02.639 Thin Provisioning: Not Supported 00:20:02.639 Per-NS Atomic Units: Yes 00:20:02.639 Atomic Boundary Size 
(Normal): 0 00:20:02.639 Atomic Boundary Size (PFail): 0 00:20:02.639 Atomic Boundary Offset: 0 00:20:02.639 Maximum Single Source Range Length: 65535 00:20:02.639 Maximum Copy Length: 65535 00:20:02.639 Maximum Source Range Count: 1 00:20:02.639 NGUID/EUI64 Never Reused: No 00:20:02.639 Namespace Write Protected: No 00:20:02.639 Number of LBA Formats: 1 00:20:02.639 Current LBA Format: LBA Format #00 00:20:02.639 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:02.639 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.639 rmmod nvme_tcp 00:20:02.639 rmmod nvme_fabrics 00:20:02.639 rmmod nvme_keyring 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@125 -- # return 0 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 626932 ']' 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 626932 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 626932 ']' 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 626932 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.639 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 626932 00:20:02.921 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:02.921 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:02.921 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 626932' 00:20:02.921 killing process with pid 626932 00:20:02.921 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 626932 00:20:02.921 11:24:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 626932 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:03.181 11:24:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.078 11:24:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:05.078 00:20:05.078 real 0m5.443s 00:20:05.078 user 0m4.562s 00:20:05.078 sys 0m1.809s 00:20:05.078 11:24:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.078 11:24:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:05.078 ************************************ 00:20:05.078 END TEST nvmf_identify 00:20:05.078 ************************************ 00:20:05.078 11:24:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:05.078 11:24:31 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:05.078 11:24:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:05.078 11:24:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.078 11:24:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:05.078 ************************************ 00:20:05.078 START TEST nvmf_perf 00:20:05.078 ************************************ 00:20:05.078 11:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:05.336 * Looking for test storage... 
00:20:05.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:05.337 11:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.867 11:24:33 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:07.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:07.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:07.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:20:07.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.867 
11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:07.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:07.867 00:20:07.867 --- 10.0.0.2 ping statistics --- 00:20:07.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.867 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:07.867 00:20:07.867 --- 10.0.0.1 ping statistics --- 00:20:07.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.867 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=629010 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 629010 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 629010 ']' 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.867 11:24:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:07.867 [2024-07-12 11:24:33.596115] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:20:07.867 [2024-07-12 11:24:33.596213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.867 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.867 [2024-07-12 11:24:33.658712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.867 [2024-07-12 11:24:33.767190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.868 [2024-07-12 11:24:33.767235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.868 [2024-07-12 11:24:33.767257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.868 [2024-07-12 11:24:33.767268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.868 [2024-07-12 11:24:33.767277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.868 [2024-07-12 11:24:33.767325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.868 [2024-07-12 11:24:33.767383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.868 [2024-07-12 11:24:33.767449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.868 [2024-07-12 11:24:33.767452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:08.434 11:24:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:11.710 11:24:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:11.710 11:24:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:11.967 11:24:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:11.967 11:24:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:12.224 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:12.224 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:88:00.0 ']' 00:20:12.224 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:12.224 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:12.224 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.481 [2024-07-12 11:24:38.374935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.481 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.738 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:12.738 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.995 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:12.995 11:24:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:13.253 11:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.510 [2024-07-12 11:24:39.390678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.510 11:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:13.767 11:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:13.767 11:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 
00:20:13.767 11:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:13.767 11:24:39 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:15.138 Initializing NVMe Controllers 00:20:15.138 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:15.138 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:15.138 Initialization complete. Launching workers. 00:20:15.138 ======================================================== 00:20:15.138 Latency(us) 00:20:15.138 Device Information : IOPS MiB/s Average min max 00:20:15.138 PCIE (0000:88:00.0) NSID 1 from core 0: 85005.82 332.05 375.93 43.04 4385.87 00:20:15.138 ======================================================== 00:20:15.138 Total : 85005.82 332.05 375.93 43.04 4385.87 00:20:15.138 00:20:15.138 11:24:40 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:15.138 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.069 Initializing NVMe Controllers 00:20:16.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:16.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:16.070 Initialization complete. Launching workers. 
00:20:16.070 ======================================================== 00:20:16.070 Latency(us) 00:20:16.070 Device Information : IOPS MiB/s Average min max 00:20:16.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 9208.07 138.03 45821.27 00:20:16.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17955.08 5980.95 47906.92 00:20:16.070 ======================================================== 00:20:16.070 Total : 169.00 0.66 12106.49 138.03 47906.92 00:20:16.070 00:20:16.070 11:24:42 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:16.070 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.443 Initializing NVMe Controllers 00:20:17.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:17.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:17.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:17.443 Initialization complete. Launching workers. 
00:20:17.443 ======================================================== 00:20:17.443 Latency(us) 00:20:17.443 Device Information : IOPS MiB/s Average min max 00:20:17.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8635.62 33.73 3720.15 694.52 7895.30 00:20:17.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3825.83 14.94 8413.57 5867.41 16046.34 00:20:17.443 ======================================================== 00:20:17.443 Total : 12461.45 48.68 5161.09 694.52 16046.34 00:20:17.443 00:20:17.443 11:24:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:17.443 11:24:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:17.443 11:24:43 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:17.443 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.007 Initializing NVMe Controllers 00:20:20.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.007 Controller IO queue size 128, less than required. 00:20:20.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.007 Controller IO queue size 128, less than required. 00:20:20.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:20.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:20.007 Initialization complete. Launching workers. 
00:20:20.007 ======================================================== 00:20:20.007 Latency(us) 00:20:20.007 Device Information : IOPS MiB/s Average min max 00:20:20.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1536.94 384.23 84597.22 61901.04 152892.19 00:20:20.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 593.48 148.37 230260.24 120850.03 357853.45 00:20:20.007 ======================================================== 00:20:20.007 Total : 2130.41 532.60 125175.02 61901.04 357853.45 00:20:20.007 00:20:20.007 11:24:45 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:20.007 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.007 No valid NVMe controllers or AIO or URING devices found 00:20:20.007 Initializing NVMe Controllers 00:20:20.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.007 Controller IO queue size 128, less than required. 00:20:20.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.007 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:20.007 Controller IO queue size 128, less than required. 00:20:20.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:20.007 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:20.007 WARNING: Some requested NVMe devices were skipped 00:20:20.007 11:24:46 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:20.007 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.537 Initializing NVMe Controllers 00:20:22.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.537 Controller IO queue size 128, less than required. 00:20:22.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.537 Controller IO queue size 128, less than required. 00:20:22.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:22.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:22.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:22.537 Initialization complete. Launching workers. 
00:20:22.537 00:20:22.537 ==================== 00:20:22.537 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:22.537 TCP transport: 00:20:22.537 polls: 12129 00:20:22.537 idle_polls: 8715 00:20:22.537 sock_completions: 3414 00:20:22.537 nvme_completions: 6181 00:20:22.537 submitted_requests: 9224 00:20:22.537 queued_requests: 1 00:20:22.537 00:20:22.537 ==================== 00:20:22.537 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:22.537 TCP transport: 00:20:22.537 polls: 12524 00:20:22.537 idle_polls: 9183 00:20:22.537 sock_completions: 3341 00:20:22.537 nvme_completions: 6035 00:20:22.537 submitted_requests: 9044 00:20:22.537 queued_requests: 1 00:20:22.537 ======================================================== 00:20:22.537 Latency(us) 00:20:22.537 Device Information : IOPS MiB/s Average min max 00:20:22.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1543.94 385.98 85128.81 40999.61 146104.78 00:20:22.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1507.46 376.87 86078.05 41198.70 134675.42 00:20:22.537 ======================================================== 00:20:22.537 Total : 3051.40 762.85 85597.76 40999.61 146104.78 00:20:22.537 00:20:22.537 11:24:48 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:22.537 11:24:48 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:22.796 11:24:48 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.796 rmmod nvme_tcp 00:20:22.796 rmmod nvme_fabrics 00:20:22.796 rmmod nvme_keyring 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 629010 ']' 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 629010 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 629010 ']' 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 629010 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 629010 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 629010' 00:20:22.796 killing process with pid 629010 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 629010 00:20:22.796 11:24:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 629010 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.693 11:24:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.597 11:24:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.597 00:20:26.597 real 0m21.369s 00:20:26.597 user 1m5.059s 00:20:26.597 sys 0m5.618s 00:20:26.597 11:24:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.597 11:24:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:26.597 ************************************ 00:20:26.597 END TEST nvmf_perf 00:20:26.597 ************************************ 00:20:26.597 11:24:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.597 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:26.597 11:24:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:26.597 11:24:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.597 11:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.597 ************************************ 00:20:26.597 START TEST nvmf_fio_host 00:20:26.597 ************************************ 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:26.597 * Looking for test storage... 
00:20:26.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.597 11:24:52 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:26.598 
11:24:52 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.598 11:24:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.495 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.495 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.495 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.495 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:28.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:28.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:28.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:28.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.496 
11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.496 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:20:28.754 00:20:28.754 --- 10.0.0.2 ping statistics --- 00:20:28.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.754 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:28.754 00:20:28.754 --- 10.0.0.1 ping statistics --- 00:20:28.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.754 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.754 11:24:54 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=634655 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 634655 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 634655 ']' 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.754 11:24:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.754 [2024-07-12 11:24:54.758324] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:20:28.754 [2024-07-12 11:24:54.758423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.754 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.754 [2024-07-12 11:24:54.843473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.013 [2024-07-12 11:24:54.982974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.013 [2024-07-12 11:24:54.983038] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.013 [2024-07-12 11:24:54.983064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.013 [2024-07-12 11:24:54.983088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.013 [2024-07-12 11:24:54.983121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:29.013 [2024-07-12 11:24:54.983243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.013 [2024-07-12 11:24:54.983306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.013 [2024-07-12 11:24:54.983392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.013 [2024-07-12 11:24:54.983381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.013 11:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.013 11:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:29.013 11:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:29.271 [2024-07-12 11:24:55.389408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.529 11:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:29.529 11:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.529 11:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.529 11:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:29.787 Malloc1 00:20:29.787 11:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.044 11:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:30.302 11:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.560 
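The RPC calls interleaved in the trace up to this point assemble the target configuration step by step. Collected in order as a sketch (NQN, serial, address, and sizes are copied from the log; the `rpc=` path is an assumption, and the commands require an already-running `nvmf_tgt` reachable at the default `/var/tmp/spdk.sock`, so this is an outline rather than a standalone script):

```shell
rpc=scripts/rpc.py   # assumed path, relative to an SPDK checkout

# Create the TCP transport with optimized settings and an 8192-byte IO unit.
$rpc nvmf_create_transport -t tcp -o -u 8192

# Back the subsystem with a 64 MiB malloc bdev using 512-byte blocks.
$rpc bdev_malloc_create 64 512 -b Malloc1

# Create the subsystem, attach the bdev as a namespace, and listen on TCP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```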
[2024-07-12 11:24:56.548799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.560 11:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:30.818 11:24:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:31.076 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:31.076 fio-3.35 00:20:31.076 Starting 1 thread 00:20:31.076 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.602 00:20:33.602 test: (groupid=0, jobs=1): err= 0: pid=637199: Fri Jul 12 11:24:59 2024 00:20:33.602 read: IOPS=8836, BW=34.5MiB/s (36.2MB/s)(69.3MiB/2007msec) 00:20:33.602 slat (nsec): 
min=1905, max=161216, avg=2495.19, stdev=1775.20 00:20:33.602 clat (usec): min=2243, max=13689, avg=7926.98, stdev=650.80 00:20:33.602 lat (usec): min=2273, max=13692, avg=7929.47, stdev=650.67 00:20:33.602 clat percentiles (usec): 00:20:33.602 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7439], 00:20:33.602 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:20:33.602 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:20:33.602 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11731], 99.95th=[12518], 00:20:33.602 | 99.99th=[13698] 00:20:33.602 bw ( KiB/s): min=34320, max=35896, per=100.00%, avg=35344.00, stdev=702.76, samples=4 00:20:33.602 iops : min= 8580, max= 8974, avg=8836.00, stdev=175.69, samples=4 00:20:33.602 write: IOPS=8852, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2007msec); 0 zone resets 00:20:33.602 slat (usec): min=2, max=156, avg= 2.66, stdev= 1.53 00:20:33.602 clat (usec): min=1640, max=13561, avg=6502.90, stdev=560.58 00:20:33.602 lat (usec): min=1649, max=13564, avg=6505.56, stdev=560.51 00:20:33.602 clat percentiles (usec): 00:20:33.602 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6128], 00:20:33.602 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:20:33.602 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:20:33.602 | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[11338], 99.95th=[12518], 00:20:33.602 | 99.99th=[12911] 00:20:33.602 bw ( KiB/s): min=35080, max=35712, per=99.98%, avg=35402.00, stdev=312.61, samples=4 00:20:33.602 iops : min= 8770, max= 8928, avg=8850.50, stdev=78.15, samples=4 00:20:33.603 lat (msec) : 2=0.02%, 4=0.11%, 10=99.67%, 20=0.20% 00:20:33.603 cpu : usr=62.51%, sys=35.79%, ctx=82, majf=0, minf=41 00:20:33.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:33.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:33.603 issued rwts: total=17734,17766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:33.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:33.603 00:20:33.603 Run status group 0 (all jobs): 00:20:33.603 READ: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.3MiB (72.6MB), run=2007-2007msec 00:20:33.603 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.8MB), run=2007-2007msec 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:33.603 11:24:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:33.603 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:33.603 fio-3.35 00:20:33.603 Starting 1 thread 00:20:33.603 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.128 00:20:36.128 test: (groupid=0, jobs=1): err= 0: pid=637527: Fri Jul 12 11:25:02 2024 00:20:36.128 read: IOPS=8375, BW=131MiB/s (137MB/s)(263MiB/2008msec) 00:20:36.128 slat (usec): 
min=2, max=115, avg= 3.78, stdev= 1.92 00:20:36.128 clat (usec): min=1436, max=16621, avg=8713.19, stdev=1967.36 00:20:36.128 lat (usec): min=1440, max=16625, avg=8716.97, stdev=1967.38 00:20:36.128 clat percentiles (usec): 00:20:36.128 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 7046], 00:20:36.128 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 9241], 00:20:36.128 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11076], 95.00th=[11994], 00:20:36.128 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14615], 99.95th=[15533], 00:20:36.128 | 99.99th=[16581] 00:20:36.128 bw ( KiB/s): min=61728, max=79360, per=52.44%, avg=70272.00, stdev=7411.62, samples=4 00:20:36.128 iops : min= 3858, max= 4960, avg=4392.00, stdev=463.23, samples=4 00:20:36.128 write: IOPS=4968, BW=77.6MiB/s (81.4MB/s)(144MiB/1851msec); 0 zone resets 00:20:36.128 slat (usec): min=30, max=195, avg=34.36, stdev= 6.02 00:20:36.128 clat (usec): min=5093, max=20550, avg=11311.06, stdev=2041.96 00:20:36.128 lat (usec): min=5124, max=20596, avg=11345.42, stdev=2042.25 00:20:36.128 clat percentiles (usec): 00:20:36.128 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:20:36.128 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11207], 60.00th=[11600], 00:20:36.128 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14091], 95.00th=[15139], 00:20:36.128 | 99.00th=[16581], 99.50th=[17171], 99.90th=[19530], 99.95th=[19792], 00:20:36.128 | 99.99th=[20579] 00:20:36.128 bw ( KiB/s): min=63808, max=81728, per=91.83%, avg=72992.00, stdev=7598.77, samples=4 00:20:36.128 iops : min= 3988, max= 5108, avg=4562.00, stdev=474.92, samples=4 00:20:36.128 lat (msec) : 2=0.06%, 4=0.17%, 10=58.99%, 20=40.78%, 50=0.01% 00:20:36.128 cpu : usr=77.68%, sys=21.03%, ctx=41, majf=0, minf=65 00:20:36.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:36.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.128 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:36.128 issued rwts: total=16819,9196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:36.128 00:20:36.128 Run status group 0 (all jobs): 00:20:36.128 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2008-2008msec 00:20:36.128 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=144MiB (151MB), run=1851-1851msec 00:20:36.128 11:25:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.385 rmmod nvme_tcp 00:20:36.385 rmmod nvme_fabrics 00:20:36.385 rmmod nvme_keyring 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 634655 ']' 
00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 634655 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 634655 ']' 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 634655 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634655 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:36.385 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634655' 00:20:36.386 killing process with pid 634655 00:20:36.386 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 634655 00:20:36.386 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 634655 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.643 11:25:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.173 11:25:04 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:39.173 00:20:39.173 real 0m12.189s 00:20:39.173 user 0m36.395s 00:20:39.173 sys 0m4.065s 00:20:39.173 11:25:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.173 11:25:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.173 ************************************ 00:20:39.173 END TEST nvmf_fio_host 00:20:39.173 ************************************ 00:20:39.173 11:25:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:39.173 11:25:04 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:39.173 11:25:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.173 11:25:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.173 11:25:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:39.173 ************************************ 00:20:39.173 START TEST nvmf_failover 00:20:39.173 ************************************ 00:20:39.173 11:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:39.173 * Looking for test storage... 
00:20:39.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:39.173 11:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.173 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:39.173 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.174 11:25:04 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.174 11:25:04 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.174 11:25:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.073 11:25:06 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:41.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:41.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:41.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:41.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:41.073 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.074 11:25:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:41.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:20:41.074 00:20:41.074 --- 10.0.0.2 ping statistics --- 00:20:41.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.074 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:41.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:20:41.074 00:20:41.074 --- 10.0.0.1 ping statistics --- 00:20:41.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.074 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=639802 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 639802 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 639802 ']' 
00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.074 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:41.074 [2024-07-12 11:25:07.173262] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:20:41.074 [2024-07-12 11:25:07.173331] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.331 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.331 [2024-07-12 11:25:07.236638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.331 [2024-07-12 11:25:07.344224] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.331 [2024-07-12 11:25:07.344274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.331 [2024-07-12 11:25:07.344288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.332 [2024-07-12 11:25:07.344299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.332 [2024-07-12 11:25:07.344309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.332 [2024-07-12 11:25:07.344394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.332 [2024-07-12 11:25:07.344458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.332 [2024-07-12 11:25:07.344461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.332 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.332 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:41.332 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.332 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.332 11:25:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:41.589 11:25:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.589 11:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:41.846 [2024-07-12 11:25:07.760651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.846 11:25:07 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:42.112 Malloc0 00:20:42.112 11:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.424 11:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.682 11:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.939 [2024-07-12 11:25:08.883254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.939 11:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:43.196 [2024-07-12 11:25:09.123947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:43.196 11:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:43.454 [2024-07-12 11:25:09.376764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=640080 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 640080 /var/tmp/bdevperf.sock 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 640080 ']' 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.454 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:43.711 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.711 11:25:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:20:43.711 11:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.275 NVMe0n1 00:20:44.276 11:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:44.533 00:20:44.533 11:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=640212 00:20:44.533 11:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.533 11:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:45.465 11:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.723 [2024-07-12 11:25:11.797541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226f070 is same with the state(5) to be set
00:20:45.723 11:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:49.000 11:25:14 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:49.259 00:20:49.259 11:25:15 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:49.518 [2024-07-12 11:25:15.513934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2270640 is same with the state(5) to be set 00:20:49.519 11:25:15 
nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:52.803 11:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:52.803 [2024-07-12 11:25:18.817030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.803 11:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:53.738 11:25:19 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:53.996 [2024-07-12 11:25:20.071631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2270e70 is same with the state(5) to be set 00:20:53.997 11:25:20 nvmf_tcp.nvmf_failover -- 
host/failover.sh@59 -- # wait 640212 00:21:00.555 0 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 640080 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 640080 ']' 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 640080 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 640080 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 640080' 00:21:00.555 killing process with pid 640080 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 640080 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 640080 00:21:00.555 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:00.555 [2024-07-12 11:25:09.440187] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:21:00.555 [2024-07-12 11:25:09.440282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640080 ] 00:21:00.555 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.555 [2024-07-12 11:25:09.499670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.555 [2024-07-12 11:25:09.609669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.555 Running I/O for 15 seconds... 00:21:00.555 [2024-07-12 11:25:11.798082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.555 [2024-07-12 11:25:11.798609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 
nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.555 [2024-07-12 11:25:11.798622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 
[2024-07-12 11:25:11.798788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.798979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.798993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 
11:25:11.799367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.556 [2024-07-12 11:25:11.799408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79480 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.556 [2024-07-12 11:25:11.799944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.556 [2024-07-12 11:25:11.799959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.799973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.799988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.557 [2024-07-12 11:25:11.800154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:00.557 [2024-07-12 11:25:11.800226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:00.557 [2024-07-12 11:25:11.800729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.800968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.800985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.557 [2024-07-12 11:25:11.801270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.557 [2024-07-12 11:25:11.801286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:00.558 [2024-07-12 11:25:11.801299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 [2024-07-12 11:25:11.801699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 [2024-07-12 11:25:11.801728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 [2024-07-12 11:25:11.801756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 [2024-07-12 11:25:11.801786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 
[2024-07-12 11:25:11.801814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 [2024-07-12 11:25:11.801842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.558 [2024-07-12 11:25:11.801893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.801972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.801986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.802015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.802050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.802080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:11.802110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8390 is same with the state(5) to be set 00:21:00.558 [2024-07-12 11:25:11.802141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:00.558 [2024-07-12 11:25:11.802165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:00.558 [2024-07-12 11:25:11.802177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:21:00.558 [2024-07-12 11:25:11.802205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802269] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8a8390 was disconnected and freed. reset controller. 00:21:00.558 [2024-07-12 11:25:11.802287] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:00.558 [2024-07-12 11:25:11.802320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.558 [2024-07-12 11:25:11.802353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.558 [2024-07-12 11:25:11.802383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.558 [2024-07-12 11:25:11.802411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.558 [2024-07-12 11:25:11.802438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:11.802452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:00.558 [2024-07-12 11:25:11.805750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:00.558 [2024-07-12 11:25:11.805788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8820f0 (9): Bad file descriptor 00:21:00.558 [2024-07-12 11:25:11.887182] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:00.558 [2024-07-12 11:25:15.515743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.515785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.515814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.515835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.515851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.515871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.515904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.515918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.515933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96688 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.515962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.515978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.515992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.516007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.558 [2024-07-12 11:25:15.516022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.558 [2024-07-12 11:25:15.516037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.559 [2024-07-12 11:25:15.516081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:00.559 [2024-07-12 11:25:15.516490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.559 [2024-07-12 11:25:15.516633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.559 [2024-07-12 11:25:15.516648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:00.559 [2024-07-12 11:25:15.516661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same READ command / "ABORTED - SQ DELETION" completion pair repeats for sequential 8-block READs from lba:96872 through lba:97024, all on sqid:1 ...]
00:21:00.560 [2024-07-12 11:25:15.517286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:00.560 [2024-07-12 11:25:15.517299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same WRITE command / "ABORTED - SQ DELETION" completion pair repeats for sequential 8-block WRITEs from lba:97056 through lba:97560, all on sqid:1 ...]
00:21:00.561 [2024-07-12 11:25:15.519366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:00.561 [2024-07-12 11:25:15.519384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 
00:21:00.561 [2024-07-12 11:25:15.519397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:00.561 [2024-07-12 11:25:15.519414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
[... the same manual-complete / print / abort / "aborting queued i/o" sequence repeats for queued 8-block WRITEs from lba:97576 through lba:97672 and for a queued READ at lba:97032 ...]
00:21:00.562 [2024-07-12 11:25:15.520209] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa4cd80 was disconnected and freed. reset controller. 
00:21:00.562 [2024-07-12 11:25:15.520229] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:00.562 [2024-07-12 11:25:15.520282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:00.562 [2024-07-12 11:25:15.520301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.562 [2024-07-12 11:25:15.520317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:00.562 [2024-07-12 11:25:15.520330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.562 [2024-07-12 11:25:15.520348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:00.562 [2024-07-12 11:25:15.520362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.562 [2024-07-12 11:25:15.520376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:00.562 [2024-07-12 11:25:15.520390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:00.562 [2024-07-12 11:25:15.520403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:00.562 [2024-07-12 11:25:15.520445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8820f0 (9): Bad file descriptor
00:21:00.562 [2024-07-12 11:25:15.523714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:00.562 [2024-07-12 11:25:15.689020] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:00.562 [2024-07-12 11:25:20.072373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:00.562 [2024-07-12 11:25:20.072414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same command/ABORTED - SQ DELETION (00/08) pairs repeat for READ lba:64848 through lba:64872 and WRITE lba:65000 through lba:65712 (step 8, varying cids) ...]
00:21:00.564 [2024-07-12 11:25:20.075397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:00.564
[2024-07-12 11:25:20.075412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.564 [2024-07-12 11:25:20.075663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.564 [2024-07-12 11:25:20.075677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:00.565 [2024-07-12 11:25:20.075942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.075971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.075987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 
[2024-07-12 11:25:20.076298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.565 [2024-07-12 11:25:20.076368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:00.565 [2024-07-12 11:25:20.076413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:00.565 [2024-07-12 11:25:20.076425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64992 len:8 PRP1 0x0 PRP2 0x0 00:21:00.565 [2024-07-12 11:25:20.076438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076497] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa4cb70 was disconnected and freed. reset controller. 
00:21:00.565 [2024-07-12 11:25:20.076514] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:00.565 [2024-07-12 11:25:20.076546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.565 [2024-07-12 11:25:20.076578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.565 [2024-07-12 11:25:20.076607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.565 [2024-07-12 11:25:20.076634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:00.565 [2024-07-12 11:25:20.076662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:00.565 [2024-07-12 11:25:20.076677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:00.565 [2024-07-12 11:25:20.079934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:00.565 [2024-07-12 11:25:20.079974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8820f0 (9): Bad file descriptor 00:21:00.565 [2024-07-12 11:25:20.163833] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:00.565 00:21:00.565 Latency(us) 00:21:00.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.565 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:00.565 Verification LBA range: start 0x0 length 0x4000 00:21:00.565 NVMe0n1 : 15.01 8586.77 33.54 856.84 0.00 13524.91 579.51 16117.00 00:21:00.565 =================================================================================================================== 00:21:00.565 Total : 8586.77 33.54 856.84 0.00 13524.91 579.51 16117.00 00:21:00.565 Received shutdown signal, test time was about 15.000000 seconds 00:21:00.565 00:21:00.565 Latency(us) 00:21:00.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.565 =================================================================================================================== 00:21:00.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=641866 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:00.565 11:25:25 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 641866 /var/tmp/bdevperf.sock 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 641866 ']' 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.565 11:25:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:00.565 11:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.565 11:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:00.565 11:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:00.565 [2024-07-12 11:25:26.603441] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:00.565 11:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:00.823 [2024-07-12 11:25:26.840061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:00.823 11:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.387 NVMe0n1 00:21:01.387 11:25:27 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.645 00:21:01.645 11:25:27 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.210 00:21:02.210 11:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:02.210 11:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:02.210 11:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.468 11:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:05.741 11:25:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.741 11:25:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:05.741 11:25:31 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=642601 00:21:05.741 11:25:31 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.741 11:25:31 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 642601 00:21:07.112 0 00:21:07.112 11:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:07.112 [2024-07-12 11:25:26.033881] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:21:07.112 [2024-07-12 11:25:26.033980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641866 ] 00:21:07.112 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.112 [2024-07-12 11:25:26.092925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.112 [2024-07-12 11:25:26.199998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.112 [2024-07-12 11:25:28.550417] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:07.112 [2024-07-12 11:25:28.550508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.112 [2024-07-12 11:25:28.550531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-07-12 11:25:28.550547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.112 [2024-07-12 11:25:28.550561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-07-12 11:25:28.550575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.112 [2024-07-12 11:25:28.550589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-07-12 11:25:28.550611] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.112 [2024-07-12 11:25:28.550626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.112 [2024-07-12 11:25:28.550640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:07.112 [2024-07-12 11:25:28.550684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:07.112 [2024-07-12 11:25:28.550715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16140f0 (9): Bad file descriptor 00:21:07.112 [2024-07-12 11:25:28.603182] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:07.112 Running I/O for 1 seconds... 00:21:07.112 00:21:07.112 Latency(us) 00:21:07.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.112 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:07.112 Verification LBA range: start 0x0 length 0x4000 00:21:07.112 NVMe0n1 : 1.01 8755.42 34.20 0.00 0.00 14545.86 1650.54 13592.65 00:21:07.112 =================================================================================================================== 00:21:07.112 Total : 8755.42 34.20 0.00 0.00 14545.86 1650.54 13592.65 00:21:07.112 11:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:07.112 11:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:07.383 11:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:21:07.641 11:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:07.641 11:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:07.898 11:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:08.155 11:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 641866 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 641866 ']' 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 641866 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 641866 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 641866' 00:21:11.458 killing process with pid 641866 00:21:11.458 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 
641866 00:21:11.459 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 641866 00:21:11.459 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:11.459 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.716 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.716 rmmod nvme_tcp 00:21:11.716 rmmod nvme_fabrics 00:21:11.973 rmmod nvme_keyring 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 639802 ']' 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 639802 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 639802 ']' 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 639802 00:21:11.973 11:25:37 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639802 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639802' 00:21:11.973 killing process with pid 639802 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 639802 00:21:11.973 11:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 639802 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.232 11:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.136 11:25:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.136 00:21:14.136 real 0m35.429s 00:21:14.136 user 2m4.793s 00:21:14.136 sys 0m5.959s 00:21:14.136 11:25:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.136 11:25:40 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.136 ************************************ 00:21:14.136 END TEST nvmf_failover 00:21:14.136 ************************************ 00:21:14.395 11:25:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:14.395 11:25:40 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:14.395 11:25:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:14.395 11:25:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.395 11:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.395 ************************************ 00:21:14.395 START TEST nvmf_host_discovery 00:21:14.395 ************************************ 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:14.395 * Looking for test storage... 
00:21:14.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.395 11:25:40 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:14.395 11:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:16.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.302 11:25:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:16.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.302 
11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:16.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:16.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.302 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.561 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.561 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.561 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.561 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.561 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.562 11:25:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:21:16.562 00:21:16.562 --- 10.0.0.2 ping statistics --- 00:21:16.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.562 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:21:16.562 00:21:16.562 --- 10.0.0.1 ping statistics --- 00:21:16.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.562 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=645187 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 645187 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 645187 ']' 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.562 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.562 [2024-07-12 11:25:42.578139] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:21:16.562 [2024-07-12 11:25:42.578223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.562 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.562 [2024-07-12 11:25:42.640616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.820 [2024-07-12 11:25:42.746699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.820 [2024-07-12 11:25:42.746769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.820 [2024-07-12 11:25:42.746797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.820 [2024-07-12 11:25:42.746808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.820 [2024-07-12 11:25:42.746818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.820 [2024-07-12 11:25:42.746849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.820 [2024-07-12 11:25:42.871389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.820 [2024-07-12 11:25:42.879536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.820 null0 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.820 null1 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.820 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=645207 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 645207 /tmp/host.sock 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 645207 ']' 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:16.821 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.821 11:25:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:16.821 [2024-07-12 11:25:42.950367] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:21:16.821 [2024-07-12 11:25:42.950448] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645207 ] 00:21:17.078 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.078 [2024-07-12 11:25:43.007490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.078 [2024-07-12 11:25:43.113108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:21:17.336 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 [2024-07-12 11:25:43.513243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]]
00:21:17.594 11:25:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1
00:21:18.158 [2024-07-12 11:25:44.248327] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:21:18.158 [2024-07-12 11:25:44.248359] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:21:18.158 [2024-07-12 11:25:44.248380] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:21:18.415 [2024-07-12 11:25:44.334674] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:21:18.415 [2024-07-12 11:25:44.512644] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:21:18.415 [2024-07-12 11:25:44.512670] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:21:18.672 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:21:18.673 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]]
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:21:18.930 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.931 [2024-07-12 11:25:44.977653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:18.931 [2024-07-12 11:25:44.978315] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:21:18.931 [2024-07-12 11:25:44.978362] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:18.931 11:25:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:18.931 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:19.188 [2024-07-12 11:25:45.104713] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:21:19.188 11:25:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1
00:21:19.445 [2024-07-12 11:25:45.404968] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:21:19.445 [2024-07-12 11:25:45.404990] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:21:19.445 [2024-07-12 11:25:45.405000] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:21:20.010 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:20.269 [2024-07-12 11:25:46.218640] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:21:20.269 [2024-07-12 11:25:46.218695] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:21:20.269 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:21:20.270 [2024-07-12 11:25:46.224175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.270 [2024-07-12 11:25:46.224233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.270 [2024-07-12 11:25:46.224266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.270 [2024-07-12 11:25:46.224280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.270 [2024-07-12 11:25:46.224295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.270 [2024-07-12 11:25:46.224307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.270 [2024-07-12 11:25:46.224322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.270 [2024-07-12 11:25:46.224336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.270 [2024-07-12 11:25:46.224349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:20.270 [2024-07-12 11:25:46.234181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:20.270 [2024-07-12 11:25:46.244223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:21:20.270 [2024-07-12 11:25:46.244461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:20.270 [2024-07-12 11:25:46.244490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420
00:21:20.270 [2024-07-12 11:25:46.244507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set
00:21:20.270 [2024-07-12 11:25:46.244530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor
00:21:20.270 [2024-07-12 11:25:46.244566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:21:20.270 [2024-07-12 11:25:46.244583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:21:20.270 [2024-07-12 11:25:46.244599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.244619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:20.270 [2024-07-12 11:25:46.254311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:20.270 [2024-07-12 11:25:46.254485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.270 [2024-07-12 11:25:46.254513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420 00:21:20.270 [2024-07-12 11:25:46.254529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set 00:21:20.270 [2024-07-12 11:25:46.254551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor 00:21:20.270 [2024-07-12 11:25:46.254571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.270 [2024-07-12 11:25:46.254591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:20.270 [2024-07-12 11:25:46.254605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.254624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:20.270 [2024-07-12 11:25:46.264397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:20.270 [2024-07-12 11:25:46.264546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.270 [2024-07-12 11:25:46.264574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420 00:21:20.270 [2024-07-12 11:25:46.264591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set 00:21:20.270 [2024-07-12 11:25:46.264613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor 00:21:20.270 [2024-07-12 11:25:46.264646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.270 [2024-07-12 11:25:46.264663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:20.270 [2024-07-12 11:25:46.264677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.264696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
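The xtrace lines above (`common/autotest_common.sh@912`–`@916`) show the `waitforcondition` helper polling `get_subsystem_names` until it reports `nvme0`. A minimal reconstruction from what the trace exposes — `local cond`, `local max=10`, `(( max-- ))`, an `eval` of the condition, and `return 0` on success; the sleep interval between attempts is an assumption, as it does not appear in the trace:

```shell
# Hedged reconstruction of waitforcondition as traced in this log.
# Only max=10, the eval'd condition, and the return paths are visible;
# the 1-second poll interval is assumed.
waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1
}
```

Usage matches the trace, e.g. `waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'`; note that `(( max-- ))` post-decrements, so the condition is evaluated at most 10 times before the helper gives up.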
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:20.270 [2024-07-12 11:25:46.274485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:20.270 [2024-07-12 11:25:46.274661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.270 [2024-07-12 11:25:46.274690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420 00:21:20.270 [2024-07-12 
11:25:46.274706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set 00:21:20.270 [2024-07-12 11:25:46.274728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor 00:21:20.270 [2024-07-12 11:25:46.274748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.270 [2024-07-12 11:25:46.274762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:20.270 [2024-07-12 11:25:46.274775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.274799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:20.270 [2024-07-12 11:25:46.284559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:20.270 [2024-07-12 11:25:46.284701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.270 [2024-07-12 11:25:46.284728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420 00:21:20.270 [2024-07-12 11:25:46.284744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set 00:21:20.270 [2024-07-12 11:25:46.284765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor 00:21:20.270 [2024-07-12 11:25:46.284798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.270 [2024-07-12 11:25:46.284816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:20.270 [2024-07-12 11:25:46.284829] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.284848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:20.270 [2024-07-12 11:25:46.294630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:20.270 [2024-07-12 11:25:46.294819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.270 [2024-07-12 11:25:46.294846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420 00:21:20.270 [2024-07-12 11:25:46.294861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set 00:21:20.270 [2024-07-12 11:25:46.294892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor 00:21:20.270 [2024-07-12 11:25:46.294912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.270 [2024-07-12 11:25:46.294926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:20.270 [2024-07-12 11:25:46.294939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.294957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.270 [2024-07-12 11:25:46.304715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:20.270 [2024-07-12 11:25:46.304889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.270 [2024-07-12 11:25:46.304918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897c00 with addr=10.0.0.2, port=4420 00:21:20.270 [2024-07-12 11:25:46.304933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897c00 is same with the state(5) to be set 00:21:20.270 [2024-07-12 11:25:46.304955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1897c00 (9): Bad file descriptor 00:21:20.270 [2024-07-12 11:25:46.304988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.270 [2024-07-12 11:25:46.305005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:20.270 [2024-07-12 11:25:46.305019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.270 [2024-07-12 11:25:46.305037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:20.270 [2024-07-12 11:25:46.305583] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:20.270 [2024-07-12 11:25:46.305612] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:20.270 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.271 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:20.528 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.529 11:25:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 [2024-07-12 11:25:47.607630] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:21.902 [2024-07-12 11:25:47.607658] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:21.902 [2024-07-12 11:25:47.607680] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:21.902 [2024-07-12 11:25:47.735082] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:21.902 [2024-07-12 11:25:47.802893] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:21.902 [2024-07-12 11:25:47.802947] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 request: 00:21:21.902 { 00:21:21.902 "name": "nvme", 00:21:21.902 "trtype": "tcp", 00:21:21.902 "traddr": "10.0.0.2", 00:21:21.902 "adrfam": "ipv4", 00:21:21.902 "trsvcid": "8009", 00:21:21.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:21.902 "wait_for_attach": true, 00:21:21.902 "method": "bdev_nvme_start_discovery", 00:21:21.902 "req_id": 1 00:21:21.902 } 00:21:21.902 Got JSON-RPC error 
response 00:21:21.902 response: 00:21:21.902 { 00:21:21.902 "code": -17, 00:21:21.902 "message": "File exists" 00:21:21.902 } 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.902 11:25:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 request: 00:21:21.902 { 00:21:21.902 "name": "nvme_second", 00:21:21.902 
"trtype": "tcp", 00:21:21.902 "traddr": "10.0.0.2", 00:21:21.902 "adrfam": "ipv4", 00:21:21.902 "trsvcid": "8009", 00:21:21.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:21.902 "wait_for_attach": true, 00:21:21.902 "method": "bdev_nvme_start_discovery", 00:21:21.902 "req_id": 1 00:21:21.902 } 00:21:21.902 Got JSON-RPC error response 00:21:21.902 response: 00:21:21.902 { 00:21:21.902 "code": -17, 00:21:21.902 "message": "File exists" 00:21:21.902 } 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:21.902 11:25:47 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:21.902 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.903 11:25:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:23.273 [2024-07-12 11:25:48.998915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.273 [2024-07-12 11:25:48.998958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b2c90 with addr=10.0.0.2, port=8010 00:21:23.273 [2024-07-12 11:25:48.998986] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:23.273 [2024-07-12 11:25:48.999000] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:23.273 [2024-07-12 11:25:48.999013] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:24.207 [2024-07-12 11:25:50.001242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.207 [2024-07-12 11:25:50.001277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b2c90 with addr=10.0.0.2, port=8010 00:21:24.207 [2024-07-12 11:25:50.001298] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:24.207 [2024-07-12 11:25:50.001311] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:24.207 [2024-07-12 11:25:50.001323] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:25.141 [2024-07-12 11:25:51.003553] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:25.141 request: 00:21:25.141 { 00:21:25.141 "name": "nvme_second", 00:21:25.141 "trtype": "tcp", 00:21:25.141 "traddr": "10.0.0.2", 00:21:25.141 "adrfam": "ipv4", 00:21:25.141 "trsvcid": "8010", 00:21:25.141 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:25.141 "wait_for_attach": false, 
00:21:25.141 "attach_timeout_ms": 3000, 00:21:25.141 "method": "bdev_nvme_start_discovery", 00:21:25.141 "req_id": 1 00:21:25.141 } 00:21:25.141 Got JSON-RPC error response 00:21:25.141 response: 00:21:25.141 { 00:21:25.141 "code": -110, 00:21:25.141 "message": "Connection timed out" 00:21:25.141 } 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 645207 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.141 rmmod nvme_tcp 00:21:25.141 rmmod nvme_fabrics 00:21:25.141 rmmod nvme_keyring 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 645187 ']' 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 645187 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 645187 ']' 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 645187 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 645187 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 645187' 00:21:25.141 killing process with pid 645187 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 645187 00:21:25.141 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 645187 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.399 11:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.931 00:21:27.931 real 0m13.166s 00:21:27.931 user 0m19.065s 00:21:27.931 sys 0m2.783s 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:27.931 ************************************ 00:21:27.931 END TEST nvmf_host_discovery 00:21:27.931 ************************************ 00:21:27.931 11:25:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:27.931 11:25:53 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:27.931 11:25:53 nvmf_tcp -- common/autotest_common.sh@1099 
-- # '[' 3 -le 1 ']' 00:21:27.931 11:25:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.931 11:25:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.931 ************************************ 00:21:27.931 START TEST nvmf_host_multipath_status 00:21:27.931 ************************************ 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:27.931 * Looking for test storage... 00:21:27.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:27.931 11:25:53 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.931 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.932 
11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.932 11:25:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:29.878 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:29.878 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:29.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:29.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.878 11:25:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.878 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:29.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:29.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:21:29.878 00:21:29.878 --- 10.0.0.2 ping statistics --- 00:21:29.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.879 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:21:29.879 00:21:29.879 --- 10.0.0.1 ping statistics --- 00:21:29.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.879 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=648186 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 648186 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 648186 ']' 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:29.879 [2024-07-12 11:25:55.679469] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:21:29.879 [2024-07-12 11:25:55.679560] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.879 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.879 [2024-07-12 11:25:55.743564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:29.879 [2024-07-12 11:25:55.852950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.879 [2024-07-12 11:25:55.853007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.879 [2024-07-12 11:25:55.853037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.879 [2024-07-12 11:25:55.853049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.879 [2024-07-12 11:25:55.853059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.879 [2024-07-12 11:25:55.853134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.879 [2024-07-12 11:25:55.853139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=648186 00:21:29.879 11:25:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:30.136 [2024-07-12 11:25:56.268339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.394 11:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:30.652 Malloc0 00:21:30.652 11:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:30.910 11:25:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:31.168 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.426 [2024-07-12 11:25:57.405025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.426 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.683 [2024-07-12 11:25:57.697878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=648460 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 648460 /var/tmp/bdevperf.sock 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 648460 ']' 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:31.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.683 11:25:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:31.941 11:25:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.941 11:25:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:31.942 11:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:32.507 11:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:32.765 Nvme0n1 00:21:32.765 11:25:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:33.330 Nvme0n1 00:21:33.330 11:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:33.330 11:25:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.230 11:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:35.230 11:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:35.485 11:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:35.743 11:26:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:36.675 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:36.675 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:36.675 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.675 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:36.932 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:36.932 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:36.932 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:36.932 11:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:37.189 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:37.189 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:37.189 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.189 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:37.447 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.447 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:37.447 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.447 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:37.704 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.704 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:37.704 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.704 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:37.960 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:37.960 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:37.960 11:26:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:37.960 11:26:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:38.217 11:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:38.217 11:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:38.217 11:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:38.474 11:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:38.732 11:26:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:39.698 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:39.698 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:39.698 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.698 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:39.954 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:21:39.954 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:39.954 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:39.954 11:26:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:40.210 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.210 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:40.210 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.210 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:40.466 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.466 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:40.466 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.466 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:40.724 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.724 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:21:40.724 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.724 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:40.981 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:40.981 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:40.982 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:40.982 11:26:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:41.238 11:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:41.238 11:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:41.238 11:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:41.496 11:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:41.753 11:26:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:42.685 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:42.685 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:42.685 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.685 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:42.943 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:42.943 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:42.943 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:42.943 11:26:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:43.201 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:43.201 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:43.201 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.201 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:43.459 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.459 11:26:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:43.459 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.459 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:43.717 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.717 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:43.717 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.717 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:43.975 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:43.975 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:43.975 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:43.975 11:26:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:44.234 11:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:44.234 11:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:21:44.234 11:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:44.492 11:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:44.750 11:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:45.686 11:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:45.686 11:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:45.686 11:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.686 11:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:45.944 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:45.944 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:45.944 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:45.944 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:46.202 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:21:46.202 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:46.202 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.202 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:46.474 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.474 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:46.474 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.474 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:46.734 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.734 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:46.734 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.734 11:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:46.992 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:46.992 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:21:46.992 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:46.992 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:47.250 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:47.250 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:47.250 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:47.507 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:47.764 11:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:48.695 11:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:48.695 11:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:48.695 11:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.695 11:26:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:48.952 11:26:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:48.952 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:48.952 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:48.952 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:49.213 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:49.213 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:49.213 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.213 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:49.470 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.470 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:49.470 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.470 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:49.728 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:49.728 
11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:49.728 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.728 11:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:49.986 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:49.986 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:49.986 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:49.986 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:50.243 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:50.243 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:50.243 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:50.501 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:50.759 11:26:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:21:51.694 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:51.694 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:51.694 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.694 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:51.952 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:51.952 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:51.952 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:51.952 11:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:52.210 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.210 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:52.210 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.210 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:52.468 11:26:18 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.468 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:52.468 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.468 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:52.726 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:52.726 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:52.726 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.726 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:52.985 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:52.985 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:52.985 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:52.985 11:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:53.243 11:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:53.243 11:26:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:53.501 11:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:53.501 11:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:53.759 11:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:54.017 11:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:54.951 11:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:54.951 11:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:54.951 11:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:54.951 11:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:55.209 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.209 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:55.209 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.209 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:55.467 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.467 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:55.467 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.467 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:55.725 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.725 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:55.725 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:55.725 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:55.983 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:55.983 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:55.983 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:21:55.983 11:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:56.241 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.241 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:56.241 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.241 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:56.499 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.499 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:56.499 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:56.757 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:57.016 11:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:57.948 11:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:57.948 11:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:57.948 11:26:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.948 11:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:58.206 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:58.206 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:58.206 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.206 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:58.465 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.465 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:58.465 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.465 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:58.723 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.723 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:58.723 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.723 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:58.981 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:58.981 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:58.981 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:58.981 11:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.239 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.239 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:59.239 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.239 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:59.496 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.496 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:59.496 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:59.754 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:00.013 11:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:00.947 11:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:00.947 11:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:00.947 11:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:00.947 11:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:01.205 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.205 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:01.205 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.205 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:01.464 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.464 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:01.464 11:26:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.464 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:01.722 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.722 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.722 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.722 11:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:01.980 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.980 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:01.980 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.980 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:02.238 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.238 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:02.238 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.238 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:02.496 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.496 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:02.496 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:02.754 11:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:03.012 11:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:03.983 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:03.983 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:03.983 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.983 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.240 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.240 11:26:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:04.240 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.240 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.498 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.498 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.498 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.498 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:04.756 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.756 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:04.756 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.756 11:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:05.014 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.014 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:05.014 
11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.014 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:05.272 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.272 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:05.272 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.272 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 648460 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 648460 ']' 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 648460 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 648460 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 648460' 00:22:05.529 killing process with pid 648460 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 648460 00:22:05.529 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 648460 00:22:05.790 Connection closed with partial response: 00:22:05.790 00:22:05.790 00:22:05.790 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 648460 00:22:05.790 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:05.790 [2024-07-12 11:25:57.763185] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:22:05.790 [2024-07-12 11:25:57.763273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648460 ] 00:22:05.790 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.790 [2024-07-12 11:25:57.822007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.790 [2024-07-12 11:25:57.928200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.790 Running I/O for 90 seconds... 
00:22:05.790 [2024-07-12 11:26:13.492105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 
[2024-07-12 11:26:13.492404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 
11:26:13.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492862] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.492974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.492990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.790 [2024-07-12 11:26:13.493275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:05.790 [2024-07-12 11:26:13.493296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.493961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.493987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.494937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.494953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.495069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.495091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.495121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.495139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.495166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.495184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.495211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.495227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.791 [2024-07-12 11:26:13.495254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.791 [2024-07-12 11:26:13.495270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.495681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.495726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.495771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.495815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.495859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.495912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.495956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.495988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.792 [2024-07-12 11:26:13.496398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.496962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.496979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.497006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.497022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.497050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.497077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:05.792 [2024-07-12 11:26:13.497106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.792 [2024-07-12 11:26:13.497124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:13.497910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:13.497927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.793 [2024-07-12 11:26:29.085815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.085851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.085914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.085953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.085974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.085990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.793 [2024-07-12 11:26:29.086290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:05.793 [2024-07-12 11:26:29.086328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.086674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.086727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.086751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.086767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.794 [2024-07-12 11:26:29.087750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.087788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.087840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.087904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.087944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.087967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.087983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.088005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.088021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.088043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.088060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.088082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.088098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.088125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.088142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.088163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.794 [2024-07-12 11:26:29.088179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:05.794 [2024-07-12 11:26:29.088201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.088217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.088271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.088308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.088344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.088381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.795 [2024-07-12 11:26:29.088418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.795 [2024-07-12 11:26:29.088455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.088476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.088492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.089985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.090009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.090036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.090054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.090082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.090099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.090121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.090136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.090158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.090190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:05.795 [2024-07-12 11:26:29.090213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.795 [2024-07-12 11:26:29.090228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:05.795 Received shutdown signal, test time was about 
32.261309 seconds 00:22:05.795 00:22:05.795 Latency(us) 00:22:05.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.795 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.795 Verification LBA range: start 0x0 length 0x4000 00:22:05.795 Nvme0n1 : 32.26 8084.44 31.58 0.00 0.00 15806.15 728.18 4026531.84 00:22:05.795 =================================================================================================================== 00:22:05.795 Total : 8084.44 31.58 0.00 0.00 15806.15 728.18 4026531.84 00:22:05.795 11:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.052 rmmod nvme_tcp 00:22:06.052 rmmod nvme_fabrics 00:22:06.052 rmmod nvme_keyring 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.052 
11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 648186 ']' 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 648186 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 648186 ']' 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 648186 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.052 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 648186 00:22:06.310 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.310 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.310 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 648186' 00:22:06.310 killing process with pid 648186 00:22:06.310 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 648186 00:22:06.310 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 648186 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.569 11:26:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.474 11:26:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.474 00:22:08.474 real 0m41.066s 00:22:08.474 user 2m3.704s 00:22:08.474 sys 0m10.497s 00:22:08.474 11:26:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.474 11:26:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:08.474 ************************************ 00:22:08.474 END TEST nvmf_host_multipath_status 00:22:08.474 ************************************ 00:22:08.474 11:26:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.474 11:26:34 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:08.474 11:26:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.474 11:26:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.474 11:26:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.474 ************************************ 00:22:08.474 START TEST nvmf_discovery_remove_ifc 00:22:08.474 ************************************ 00:22:08.474 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:08.733 * Looking for test storage... 
00:22:08.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.733 11:26:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.733 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.734 11:26:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.734 11:26:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.265 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.265 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.266 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.266 Found net devices under 0000:0a:00.1: cvl_0_1 
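The device enumeration traced above (nvmf/common.sh lines 382-401) maps each supported PCI address to its kernel network interface by globbing the device's `net/` subdirectory under sysfs and stripping the path prefix. A minimal standalone sketch of that mapping, with the sysfs root passed in as a parameter (an assumption for illustration; the real script hardcodes `/sys/bus/pci/devices`):

```shell
# Sketch of the pci -> net-device lookup used by gather_supported_nvmf_pci_devs.
# sysfs_root is parameterized here so the logic can be exercised against a fake
# tree; the trace above globs /sys/bus/pci/devices/$pci/net/* directly.
list_pci_net_devs() {
    sysfs_root=$1
    pci=$2
    for dev in "$sysfs_root/$pci/net/"*; do
        # ${dev##*/} keeps only the interface name, matching the
        # pci_net_devs=("${pci_net_devs[@]##*/}") step in the trace.
        printf '%s\n' "${dev##*/}"
    done
}
```

In the run above this is what turns `0000:0a:00.0` and `0000:0a:00.1` into the `cvl_0_0` and `cvl_0_1` interface names reported by the `echo 'Found net devices under ...'` lines.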
00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.266 
11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:22:11.266 00:22:11.266 --- 10.0.0.2 ping statistics --- 00:22:11.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.266 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:11.266 00:22:11.266 --- 10.0.0.1 ping statistics --- 00:22:11.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.266 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=654502 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:11.266 
11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 654502 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 654502 ']' 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.266 11:26:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.266 [2024-07-12 11:26:36.995808] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:22:11.266 [2024-07-12 11:26:36.995910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.266 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.266 [2024-07-12 11:26:37.061441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.266 [2024-07-12 11:26:37.166027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.266 [2024-07-12 11:26:37.166078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:11.266 [2024-07-12 11:26:37.166107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.266 [2024-07-12 11:26:37.166118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.266 [2024-07-12 11:26:37.166128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.266 [2024-07-12 11:26:37.166154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.266 [2024-07-12 11:26:37.301638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.266 [2024-07-12 11:26:37.309787] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:11.266 null0 00:22:11.266 [2024-07-12 11:26:37.341765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=654595 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 654595 /tmp/host.sock 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 654595 ']' 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:11.266 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.267 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:11.267 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:11.267 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.267 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.525 [2024-07-12 11:26:37.407303] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:22:11.525 [2024-07-12 11:26:37.407387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654595 ] 00:22:11.525 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.525 [2024-07-12 11:26:37.465280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.525 [2024-07-12 11:26:37.571708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.525 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:11.783 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.783 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:11.783 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.783 11:26:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.715 [2024-07-12 11:26:38.728575] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.715 [2024-07-12 11:26:38.728599] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.715 [2024-07-12 11:26:38.728624] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.973 [2024-07-12 11:26:38.856065] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:12.973 [2024-07-12 11:26:38.960296] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:12.973 [2024-07-12 11:26:38.960349] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:12.973 [2024-07-12 11:26:38.960386] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:12.973 [2024-07-12 11:26:38.960407] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:12.973 [2024-07-12 11:26:38.960431] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:12.973 11:26:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.973 [2024-07-12 11:26:38.967610] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cdc870 was disconnected and freed. delete nvme_qpair. 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.973 11:26:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.973 11:26:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:12.973 11:26:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:14.346 11:26:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:15.280 11:26:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:15.280 11:26:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.212 11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:16.212 
11:26:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:17.143 11:26:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.519 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:18.519 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.519 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.519 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:18.519 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:18.519 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:18.520 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:18.520 
11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.520 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:18.520 11:26:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:18.520 [2024-07-12 11:26:44.401934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:18.520 [2024-07-12 11:26:44.402001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.520 [2024-07-12 11:26:44.402021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.520 [2024-07-12 11:26:44.402037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.520 [2024-07-12 11:26:44.402051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.520 [2024-07-12 11:26:44.402064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.520 [2024-07-12 11:26:44.402076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.520 [2024-07-12 11:26:44.402089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.520 [2024-07-12 11:26:44.402103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.520 [2024-07-12 11:26:44.402116] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:18.520 [2024-07-12 11:26:44.402129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.520 [2024-07-12 11:26:44.402142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3300 is same with the state(5) to be set 00:22:18.520 [2024-07-12 11:26:44.411877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3300 (9): Bad file descriptor 00:22:18.520 [2024-07-12 11:26:44.421920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:19.449 [2024-07-12 11:26:45.469897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:19.449 [2024-07-12 11:26:45.469941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca3300 with addr=10.0.0.2, port=4420 00:22:19.449 [2024-07-12 11:26:45.469961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3300 is same with the state(5) to be set 00:22:19.449 
[2024-07-12 11:26:45.469992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca3300 (9): Bad file descriptor 00:22:19.449 [2024-07-12 11:26:45.470376] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:19.449 [2024-07-12 11:26:45.470405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:19.449 [2024-07-12 11:26:45.470426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:19.449 [2024-07-12 11:26:45.470442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:19.449 [2024-07-12 11:26:45.470465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:19.449 [2024-07-12 11:26:45.470481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:19.449 11:26:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:20.381 [2024-07-12 11:26:46.472978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:20.381 [2024-07-12 11:26:46.473029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:20.381 [2024-07-12 11:26:46.473044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:20.381 [2024-07-12 11:26:46.473059] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:20.381 [2024-07-12 11:26:46.473087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:20.381 [2024-07-12 11:26:46.473124] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:20.381 [2024-07-12 11:26:46.473188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.381 [2024-07-12 11:26:46.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.381 [2024-07-12 11:26:46.473227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.381 [2024-07-12 11:26:46.473241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.381 [2024-07-12 11:26:46.473256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.381 [2024-07-12 11:26:46.473270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.381 [2024-07-12 11:26:46.473284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.381 
[2024-07-12 11:26:46.473299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.381 [2024-07-12 11:26:46.473312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.381 [2024-07-12 11:26:46.473326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.381 [2024-07-12 11:26:46.473339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:22:20.381 [2024-07-12 11:26:46.473542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca2780 (9): Bad file descriptor 00:22:20.381 [2024-07-12 11:26:46.474557] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:20.381 [2024-07-12 11:26:46.474580] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.381 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:20.639 11:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:21.572 11:26:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:22.505 [2024-07-12 11:26:48.526489] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:22.505 [2024-07-12 11:26:48.526525] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:22.505 [2024-07-12 11:26:48.526547] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.764 [2024-07-12 11:26:48.653957] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:22.764 
11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:22.764 11:26:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:22.764 [2024-07-12 11:26:48.878107] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:22.764 [2024-07-12 11:26:48.878153] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:22.764 [2024-07-12 11:26:48.878198] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:22.764 [2024-07-12 11:26:48.878219] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:22.764 [2024-07-12 11:26:48.878231] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:22.764 [2024-07-12 11:26:48.883761] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1caa110 was disconnected and freed. delete nvme_qpair. 
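The repeated `get_bdev_list` / `sleep 1` cycles in the trace above are the test's `wait_for_bdev` polling loop from `host/discovery_remove_ifc.sh`: it re-queries the bdev list once per second until the expected name appears (or, earlier in the trace, until it disappears). A minimal standalone sketch of that pattern follows; the RPC call is stubbed out, since `rpc_cmd` and the `/tmp/host.sock` socket only exist inside the test rig, and the stub's behavior (bdev appearing on the third poll) is an assumption for illustration.

```shell
#!/usr/bin/env bash
# Standalone sketch of the wait_for_bdev polling pattern seen in this trace.
# In the real test, get_bdev_list runs:
#   rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
# Here it is stubbed so the loop terminates without an SPDK target running.
get_bdev_list() {
    # Stub: the bdev "appears" on the third poll.
    [ "$1" -ge 3 ] && echo "nvme1n1"
    return 0
}

wait_for_bdev() {
    local expected=$1 polls=0
    while true; do
        polls=$((polls + 1))
        [[ "$(get_bdev_list "$polls")" == "$expected" ]] && break
        sleep 0.1   # the real test sleeps 1s between polls
    done
    echo "$polls"
}

polls=$(wait_for_bdev nvme1n1)
echo "bdev list matched after $polls polls"
```

The real helper also handles the empty-string case (`wait_for_bdev ''`) used earlier in the trace to wait for the bdev to vanish after the interface is taken down; the comparison logic is the same, only the expected value differs.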
00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 654595 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 654595 ']' 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 654595 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 654595 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 654595' 00:22:23.730 killing process with pid 654595 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 654595 00:22:23.730 11:26:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 654595 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.988 rmmod nvme_tcp 00:22:23.988 rmmod nvme_fabrics 00:22:23.988 rmmod nvme_keyring 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 654502 ']' 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 654502 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 654502 ']' 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 654502 00:22:23.988 11:26:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.988 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 654502 00:22:24.246 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:24.246 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:24.246 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 654502' 00:22:24.246 killing process with pid 654502 00:22:24.246 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 654502 00:22:24.246 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 654502 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.506 11:26:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.414 11:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.414 00:22:26.414 real 
0m17.881s 00:22:26.414 user 0m25.675s 00:22:26.414 sys 0m3.179s 00:22:26.414 11:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.414 11:26:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.414 ************************************ 00:22:26.414 END TEST nvmf_discovery_remove_ifc 00:22:26.414 ************************************ 00:22:26.414 11:26:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:26.414 11:26:52 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:26.415 11:26:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:26.415 11:26:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.415 11:26:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.415 ************************************ 00:22:26.415 START TEST nvmf_identify_kernel_target 00:22:26.415 ************************************ 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:26.415 * Looking for test storage... 
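The two `killprocess` invocations in the teardown above (pids 654595 and 654502, from `common/autotest_common.sh`) follow a visible pattern: confirm the pid is alive with `kill -0`, read its command name via `ps --no-headers -o comm=`, print the "killing process with pid ..." line, then kill and reap it. The sketch below reproduces that shape with a throwaway `sleep` standing in for the SPDK target; the upstream helper's extra branches (the `sudo` special case, reactor-name checks) are simplified away, so treat this as an approximation rather than the exact upstream code.

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess helper pattern from this trace:
# verify the pid, log, kill, and reap the child.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; wait only works for our own children
    return 0
}

sleep 30 &                                   # stand-in for the SPDK target process
target_pid=$!
killprocess "$target_pid"
kill -0 "$target_pid" 2>/dev/null && alive=1 || alive=0
echo "alive=$alive"
```

Note that `wait "$pid"` only succeeds for children of the current shell, which is why the real harness forks the target itself and records the pid (`654502`, `654595`) before the tests run.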
00:22:26.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.415 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.415 11:26:52 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.674 11:26:52 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.674 11:26:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.575 11:26:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:28.575 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.575 11:26:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:28.575 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.575 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:28.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:28.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.576 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.836 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:22:28.836 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.836 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:22:28.837 00:22:28.837 --- 10.0.0.2 ping statistics --- 00:22:28.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.837 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:28.837 00:22:28.837 --- 10.0.0.1 ping statistics --- 00:22:28.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.837 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.837 11:26:54 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:28.837 11:26:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:29.776 Waiting for block devices as requested 00:22:30.036 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:30.036 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:30.036 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:30.296 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:30.296 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:30.296 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:30.554 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:30.554 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:30.554 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:30.554 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:30.814 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:30.814 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:30.814 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:31.073 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:31.073 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:31.073 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:31.073 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:31.332 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:31.332 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:31.332 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:31.333 No valid GPT data, bailing 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:31.333 00:22:31.333 Discovery Log Number of Records 2, Generation counter 2 00:22:31.333 =====Discovery Log Entry 0====== 00:22:31.333 trtype: tcp 00:22:31.333 adrfam: ipv4 00:22:31.333 subtype: current discovery subsystem 00:22:31.333 treq: not specified, sq flow control disable supported 00:22:31.333 portid: 1 00:22:31.333 trsvcid: 4420 00:22:31.333 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:31.333 traddr: 10.0.0.1 00:22:31.333 eflags: none 00:22:31.333 sectype: none 00:22:31.333 =====Discovery Log Entry 1====== 00:22:31.333 trtype: tcp 00:22:31.333 adrfam: ipv4 00:22:31.333 subtype: nvme subsystem 00:22:31.333 treq: not specified, sq flow control disable supported 00:22:31.333 portid: 1 00:22:31.333 trsvcid: 4420 00:22:31.333 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:31.333 traddr: 10.0.0.1 00:22:31.333 eflags: none 00:22:31.333 sectype: none 00:22:31.333 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:31.333 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:31.592 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.592 ===================================================== 00:22:31.592 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:31.592 ===================================================== 00:22:31.592 Controller Capabilities/Features 00:22:31.592 ================================ 00:22:31.592 Vendor ID: 0000 00:22:31.592 Subsystem Vendor ID: 0000 00:22:31.592 Serial Number: dbccabf289626fa56bd2 00:22:31.592 Model Number: Linux 00:22:31.592 Firmware Version: 6.7.0-68 00:22:31.592 Recommended Arb Burst: 0 00:22:31.592 IEEE OUI Identifier: 00 00 00 00:22:31.592 Multi-path I/O 00:22:31.592 May have multiple subsystem ports: No 00:22:31.592 May have multiple controllers: No 00:22:31.592 Associated with SR-IOV VF: No 00:22:31.592 Max Data Transfer Size: Unlimited 00:22:31.592 Max Number of Namespaces: 0 00:22:31.592 Max Number of I/O Queues: 1024 00:22:31.592 NVMe Specification Version (VS): 1.3 00:22:31.592 NVMe Specification Version (Identify): 1.3 00:22:31.592 Maximum Queue Entries: 1024 00:22:31.592 Contiguous Queues Required: No 00:22:31.592 Arbitration Mechanisms Supported 00:22:31.592 Weighted Round Robin: Not Supported 00:22:31.592 Vendor Specific: Not Supported 00:22:31.592 Reset Timeout: 7500 ms 00:22:31.592 Doorbell Stride: 4 bytes 00:22:31.592 NVM Subsystem Reset: Not Supported 00:22:31.592 Command Sets Supported 00:22:31.592 NVM Command Set: Supported 00:22:31.592 Boot Partition: Not Supported 00:22:31.592 Memory Page Size Minimum: 4096 bytes 00:22:31.592 Memory Page Size Maximum: 4096 bytes 00:22:31.592 Persistent Memory Region: Not Supported 00:22:31.592 Optional Asynchronous Events Supported 00:22:31.592 Namespace Attribute Notices: Not Supported 00:22:31.593 Firmware Activation Notices: Not Supported 00:22:31.593 ANA Change Notices: Not Supported 00:22:31.593 PLE Aggregate Log Change Notices: Not Supported 
00:22:31.593 LBA Status Info Alert Notices: Not Supported 00:22:31.593 EGE Aggregate Log Change Notices: Not Supported 00:22:31.593 Normal NVM Subsystem Shutdown event: Not Supported 00:22:31.593 Zone Descriptor Change Notices: Not Supported 00:22:31.593 Discovery Log Change Notices: Supported 00:22:31.593 Controller Attributes 00:22:31.593 128-bit Host Identifier: Not Supported 00:22:31.593 Non-Operational Permissive Mode: Not Supported 00:22:31.593 NVM Sets: Not Supported 00:22:31.593 Read Recovery Levels: Not Supported 00:22:31.593 Endurance Groups: Not Supported 00:22:31.593 Predictable Latency Mode: Not Supported 00:22:31.593 Traffic Based Keep ALive: Not Supported 00:22:31.593 Namespace Granularity: Not Supported 00:22:31.593 SQ Associations: Not Supported 00:22:31.593 UUID List: Not Supported 00:22:31.593 Multi-Domain Subsystem: Not Supported 00:22:31.593 Fixed Capacity Management: Not Supported 00:22:31.593 Variable Capacity Management: Not Supported 00:22:31.593 Delete Endurance Group: Not Supported 00:22:31.593 Delete NVM Set: Not Supported 00:22:31.593 Extended LBA Formats Supported: Not Supported 00:22:31.593 Flexible Data Placement Supported: Not Supported 00:22:31.593 00:22:31.593 Controller Memory Buffer Support 00:22:31.593 ================================ 00:22:31.593 Supported: No 00:22:31.593 00:22:31.593 Persistent Memory Region Support 00:22:31.593 ================================ 00:22:31.593 Supported: No 00:22:31.593 00:22:31.593 Admin Command Set Attributes 00:22:31.593 ============================ 00:22:31.593 Security Send/Receive: Not Supported 00:22:31.593 Format NVM: Not Supported 00:22:31.593 Firmware Activate/Download: Not Supported 00:22:31.593 Namespace Management: Not Supported 00:22:31.593 Device Self-Test: Not Supported 00:22:31.593 Directives: Not Supported 00:22:31.593 NVMe-MI: Not Supported 00:22:31.593 Virtualization Management: Not Supported 00:22:31.593 Doorbell Buffer Config: Not Supported 00:22:31.593 Get LBA Status 
Capability: Not Supported 00:22:31.593 Command & Feature Lockdown Capability: Not Supported 00:22:31.593 Abort Command Limit: 1 00:22:31.593 Async Event Request Limit: 1 00:22:31.593 Number of Firmware Slots: N/A 00:22:31.593 Firmware Slot 1 Read-Only: N/A 00:22:31.593 Firmware Activation Without Reset: N/A 00:22:31.593 Multiple Update Detection Support: N/A 00:22:31.593 Firmware Update Granularity: No Information Provided 00:22:31.593 Per-Namespace SMART Log: No 00:22:31.593 Asymmetric Namespace Access Log Page: Not Supported 00:22:31.593 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:31.593 Command Effects Log Page: Not Supported 00:22:31.593 Get Log Page Extended Data: Supported 00:22:31.593 Telemetry Log Pages: Not Supported 00:22:31.593 Persistent Event Log Pages: Not Supported 00:22:31.593 Supported Log Pages Log Page: May Support 00:22:31.593 Commands Supported & Effects Log Page: Not Supported 00:22:31.593 Feature Identifiers & Effects Log Page:May Support 00:22:31.593 NVMe-MI Commands & Effects Log Page: May Support 00:22:31.593 Data Area 4 for Telemetry Log: Not Supported 00:22:31.593 Error Log Page Entries Supported: 1 00:22:31.593 Keep Alive: Not Supported 00:22:31.593 00:22:31.593 NVM Command Set Attributes 00:22:31.593 ========================== 00:22:31.593 Submission Queue Entry Size 00:22:31.593 Max: 1 00:22:31.593 Min: 1 00:22:31.593 Completion Queue Entry Size 00:22:31.593 Max: 1 00:22:31.593 Min: 1 00:22:31.593 Number of Namespaces: 0 00:22:31.593 Compare Command: Not Supported 00:22:31.593 Write Uncorrectable Command: Not Supported 00:22:31.593 Dataset Management Command: Not Supported 00:22:31.593 Write Zeroes Command: Not Supported 00:22:31.593 Set Features Save Field: Not Supported 00:22:31.593 Reservations: Not Supported 00:22:31.593 Timestamp: Not Supported 00:22:31.593 Copy: Not Supported 00:22:31.593 Volatile Write Cache: Not Present 00:22:31.593 Atomic Write Unit (Normal): 1 00:22:31.593 Atomic Write Unit (PFail): 1 
00:22:31.593 Atomic Compare & Write Unit: 1 00:22:31.593 Fused Compare & Write: Not Supported 00:22:31.593 Scatter-Gather List 00:22:31.593 SGL Command Set: Supported 00:22:31.593 SGL Keyed: Not Supported 00:22:31.593 SGL Bit Bucket Descriptor: Not Supported 00:22:31.593 SGL Metadata Pointer: Not Supported 00:22:31.593 Oversized SGL: Not Supported 00:22:31.593 SGL Metadata Address: Not Supported 00:22:31.593 SGL Offset: Supported 00:22:31.593 Transport SGL Data Block: Not Supported 00:22:31.593 Replay Protected Memory Block: Not Supported 00:22:31.593 00:22:31.593 Firmware Slot Information 00:22:31.593 ========================= 00:22:31.593 Active slot: 0 00:22:31.593 00:22:31.593 00:22:31.593 Error Log 00:22:31.593 ========= 00:22:31.593 00:22:31.593 Active Namespaces 00:22:31.593 ================= 00:22:31.593 Discovery Log Page 00:22:31.593 ================== 00:22:31.593 Generation Counter: 2 00:22:31.593 Number of Records: 2 00:22:31.593 Record Format: 0 00:22:31.593 00:22:31.593 Discovery Log Entry 0 00:22:31.593 ---------------------- 00:22:31.593 Transport Type: 3 (TCP) 00:22:31.593 Address Family: 1 (IPv4) 00:22:31.593 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:31.593 Entry Flags: 00:22:31.593 Duplicate Returned Information: 0 00:22:31.593 Explicit Persistent Connection Support for Discovery: 0 00:22:31.593 Transport Requirements: 00:22:31.593 Secure Channel: Not Specified 00:22:31.593 Port ID: 1 (0x0001) 00:22:31.593 Controller ID: 65535 (0xffff) 00:22:31.593 Admin Max SQ Size: 32 00:22:31.593 Transport Service Identifier: 4420 00:22:31.593 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:31.593 Transport Address: 10.0.0.1 00:22:31.593 Discovery Log Entry 1 00:22:31.593 ---------------------- 00:22:31.593 Transport Type: 3 (TCP) 00:22:31.593 Address Family: 1 (IPv4) 00:22:31.593 Subsystem Type: 2 (NVM Subsystem) 00:22:31.593 Entry Flags: 00:22:31.593 Duplicate Returned Information: 0 00:22:31.593 Explicit Persistent 
Connection Support for Discovery: 0 00:22:31.593 Transport Requirements: 00:22:31.593 Secure Channel: Not Specified 00:22:31.593 Port ID: 1 (0x0001) 00:22:31.593 Controller ID: 65535 (0xffff) 00:22:31.593 Admin Max SQ Size: 32 00:22:31.593 Transport Service Identifier: 4420 00:22:31.593 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:31.593 Transport Address: 10.0.0.1 00:22:31.593 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:31.593 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.593 get_feature(0x01) failed 00:22:31.593 get_feature(0x02) failed 00:22:31.593 get_feature(0x04) failed 00:22:31.593 ===================================================== 00:22:31.593 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:31.593 ===================================================== 00:22:31.593 Controller Capabilities/Features 00:22:31.593 ================================ 00:22:31.593 Vendor ID: 0000 00:22:31.593 Subsystem Vendor ID: 0000 00:22:31.593 Serial Number: 7d516d479ea4def2d43c 00:22:31.593 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:31.593 Firmware Version: 6.7.0-68 00:22:31.593 Recommended Arb Burst: 6 00:22:31.593 IEEE OUI Identifier: 00 00 00 00:22:31.593 Multi-path I/O 00:22:31.593 May have multiple subsystem ports: Yes 00:22:31.593 May have multiple controllers: Yes 00:22:31.593 Associated with SR-IOV VF: No 00:22:31.593 Max Data Transfer Size: Unlimited 00:22:31.593 Max Number of Namespaces: 1024 00:22:31.593 Max Number of I/O Queues: 128 00:22:31.593 NVMe Specification Version (VS): 1.3 00:22:31.593 NVMe Specification Version (Identify): 1.3 00:22:31.593 Maximum Queue Entries: 1024 00:22:31.593 Contiguous Queues Required: No 00:22:31.593 Arbitration Mechanisms Supported 
00:22:31.593 Weighted Round Robin: Not Supported 00:22:31.593 Vendor Specific: Not Supported 00:22:31.593 Reset Timeout: 7500 ms 00:22:31.593 Doorbell Stride: 4 bytes 00:22:31.593 NVM Subsystem Reset: Not Supported 00:22:31.593 Command Sets Supported 00:22:31.593 NVM Command Set: Supported 00:22:31.593 Boot Partition: Not Supported 00:22:31.593 Memory Page Size Minimum: 4096 bytes 00:22:31.593 Memory Page Size Maximum: 4096 bytes 00:22:31.593 Persistent Memory Region: Not Supported 00:22:31.593 Optional Asynchronous Events Supported 00:22:31.593 Namespace Attribute Notices: Supported 00:22:31.593 Firmware Activation Notices: Not Supported 00:22:31.593 ANA Change Notices: Supported 00:22:31.593 PLE Aggregate Log Change Notices: Not Supported 00:22:31.593 LBA Status Info Alert Notices: Not Supported 00:22:31.593 EGE Aggregate Log Change Notices: Not Supported 00:22:31.593 Normal NVM Subsystem Shutdown event: Not Supported 00:22:31.593 Zone Descriptor Change Notices: Not Supported 00:22:31.593 Discovery Log Change Notices: Not Supported 00:22:31.593 Controller Attributes 00:22:31.593 128-bit Host Identifier: Supported 00:22:31.593 Non-Operational Permissive Mode: Not Supported 00:22:31.593 NVM Sets: Not Supported 00:22:31.593 Read Recovery Levels: Not Supported 00:22:31.593 Endurance Groups: Not Supported 00:22:31.593 Predictable Latency Mode: Not Supported 00:22:31.593 Traffic Based Keep Alive: Supported 00:22:31.593 Namespace Granularity: Not Supported 00:22:31.593 SQ Associations: Not Supported 00:22:31.593 UUID List: Not Supported 00:22:31.593 Multi-Domain Subsystem: Not Supported 00:22:31.593 Fixed Capacity Management: Not Supported 00:22:31.593 Variable Capacity Management: Not Supported 00:22:31.593 Delete Endurance Group: Not Supported 00:22:31.593 Delete NVM Set: Not Supported 00:22:31.593 Extended LBA Formats Supported: Not Supported 00:22:31.593 Flexible Data Placement Supported: Not Supported 00:22:31.593 00:22:31.593 Controller Memory Buffer Support 
00:22:31.593 ================================ 00:22:31.593 Supported: No 00:22:31.593 00:22:31.593 Persistent Memory Region Support 00:22:31.593 ================================ 00:22:31.593 Supported: No 00:22:31.593 00:22:31.593 Admin Command Set Attributes 00:22:31.593 ============================ 00:22:31.593 Security Send/Receive: Not Supported 00:22:31.593 Format NVM: Not Supported 00:22:31.593 Firmware Activate/Download: Not Supported 00:22:31.593 Namespace Management: Not Supported 00:22:31.593 Device Self-Test: Not Supported 00:22:31.593 Directives: Not Supported 00:22:31.593 NVMe-MI: Not Supported 00:22:31.593 Virtualization Management: Not Supported 00:22:31.593 Doorbell Buffer Config: Not Supported 00:22:31.593 Get LBA Status Capability: Not Supported 00:22:31.593 Command & Feature Lockdown Capability: Not Supported 00:22:31.593 Abort Command Limit: 4 00:22:31.593 Async Event Request Limit: 4 00:22:31.593 Number of Firmware Slots: N/A 00:22:31.593 Firmware Slot 1 Read-Only: N/A 00:22:31.593 Firmware Activation Without Reset: N/A 00:22:31.593 Multiple Update Detection Support: N/A 00:22:31.593 Firmware Update Granularity: No Information Provided 00:22:31.593 Per-Namespace SMART Log: Yes 00:22:31.593 Asymmetric Namespace Access Log Page: Supported 00:22:31.593 ANA Transition Time : 10 sec 00:22:31.593 00:22:31.593 Asymmetric Namespace Access Capabilities 00:22:31.593 ANA Optimized State : Supported 00:22:31.593 ANA Non-Optimized State : Supported 00:22:31.593 ANA Inaccessible State : Supported 00:22:31.593 ANA Persistent Loss State : Supported 00:22:31.593 ANA Change State : Supported 00:22:31.593 ANAGRPID is not changed : No 00:22:31.593 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:31.593 00:22:31.593 ANA Group Identifier Maximum : 128 00:22:31.593 Number of ANA Group Identifiers : 128 00:22:31.593 Max Number of Allowed Namespaces : 1024 00:22:31.593 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:31.593 Command Effects Log Page: Supported 
00:22:31.593 Get Log Page Extended Data: Supported 00:22:31.593 Telemetry Log Pages: Not Supported 00:22:31.593 Persistent Event Log Pages: Not Supported 00:22:31.593 Supported Log Pages Log Page: May Support 00:22:31.593 Commands Supported & Effects Log Page: Not Supported 00:22:31.593 Feature Identifiers & Effects Log Page: May Support 00:22:31.593 NVMe-MI Commands & Effects Log Page: May Support 00:22:31.593 Data Area 4 for Telemetry Log: Not Supported 00:22:31.593 Error Log Page Entries Supported: 128 00:22:31.593 Keep Alive: Supported 00:22:31.593 Keep Alive Granularity: 1000 ms 00:22:31.593 00:22:31.593 NVM Command Set Attributes 00:22:31.593 ========================== 00:22:31.593 Submission Queue Entry Size 00:22:31.593 Max: 64 00:22:31.593 Min: 64 00:22:31.593 Completion Queue Entry Size 00:22:31.593 Max: 16 00:22:31.593 Min: 16 00:22:31.593 Number of Namespaces: 1024 00:22:31.593 Compare Command: Not Supported 00:22:31.593 Write Uncorrectable Command: Not Supported 00:22:31.593 Dataset Management Command: Supported 00:22:31.593 Write Zeroes Command: Supported 00:22:31.593 Set Features Save Field: Not Supported 00:22:31.593 Reservations: Not Supported 00:22:31.593 Timestamp: Not Supported 00:22:31.593 Copy: Not Supported 00:22:31.593 Volatile Write Cache: Present 00:22:31.593 Atomic Write Unit (Normal): 1 00:22:31.593 Atomic Write Unit (PFail): 1 00:22:31.593 Atomic Compare & Write Unit: 1 00:22:31.593 Fused Compare & Write: Not Supported 00:22:31.593 Scatter-Gather List 00:22:31.593 SGL Command Set: Supported 00:22:31.593 SGL Keyed: Not Supported 00:22:31.593 SGL Bit Bucket Descriptor: Not Supported 00:22:31.593 SGL Metadata Pointer: Not Supported 00:22:31.593 Oversized SGL: Not Supported 00:22:31.593 SGL Metadata Address: Not Supported 00:22:31.593 SGL Offset: Supported 00:22:31.593 Transport SGL Data Block: Not Supported 00:22:31.593 Replay Protected Memory Block: Not Supported 00:22:31.593 00:22:31.593 Firmware Slot Information 00:22:31.593 
========================= 00:22:31.593 Active slot: 0 00:22:31.593 00:22:31.593 Asymmetric Namespace Access 00:22:31.593 =========================== 00:22:31.593 Change Count : 0 00:22:31.593 Number of ANA Group Descriptors : 1 00:22:31.593 ANA Group Descriptor : 0 00:22:31.593 ANA Group ID : 1 00:22:31.593 Number of NSID Values : 1 00:22:31.593 Change Count : 0 00:22:31.594 ANA State : 1 00:22:31.594 Namespace Identifier : 1 00:22:31.594 00:22:31.594 Commands Supported and Effects 00:22:31.594 ============================== 00:22:31.594 Admin Commands 00:22:31.594 -------------- 00:22:31.594 Get Log Page (02h): Supported 00:22:31.594 Identify (06h): Supported 00:22:31.594 Abort (08h): Supported 00:22:31.594 Set Features (09h): Supported 00:22:31.594 Get Features (0Ah): Supported 00:22:31.594 Asynchronous Event Request (0Ch): Supported 00:22:31.594 Keep Alive (18h): Supported 00:22:31.594 I/O Commands 00:22:31.594 ------------ 00:22:31.594 Flush (00h): Supported 00:22:31.594 Write (01h): Supported LBA-Change 00:22:31.594 Read (02h): Supported 00:22:31.594 Write Zeroes (08h): Supported LBA-Change 00:22:31.594 Dataset Management (09h): Supported 00:22:31.594 00:22:31.594 Error Log 00:22:31.594 ========= 00:22:31.594 Entry: 0 00:22:31.594 Error Count: 0x3 00:22:31.594 Submission Queue Id: 0x0 00:22:31.594 Command Id: 0x5 00:22:31.594 Phase Bit: 0 00:22:31.594 Status Code: 0x2 00:22:31.594 Status Code Type: 0x0 00:22:31.594 Do Not Retry: 1 00:22:31.594 Error Location: 0x28 00:22:31.594 LBA: 0x0 00:22:31.594 Namespace: 0x0 00:22:31.594 Vendor Log Page: 0x0 00:22:31.594 ----------- 00:22:31.594 Entry: 1 00:22:31.594 Error Count: 0x2 00:22:31.594 Submission Queue Id: 0x0 00:22:31.594 Command Id: 0x5 00:22:31.594 Phase Bit: 0 00:22:31.594 Status Code: 0x2 00:22:31.594 Status Code Type: 0x0 00:22:31.594 Do Not Retry: 1 00:22:31.594 Error Location: 0x28 00:22:31.594 LBA: 0x0 00:22:31.594 Namespace: 0x0 00:22:31.594 Vendor Log Page: 0x0 00:22:31.594 ----------- 00:22:31.594 
Entry: 2 00:22:31.594 Error Count: 0x1 00:22:31.594 Submission Queue Id: 0x0 00:22:31.594 Command Id: 0x4 00:22:31.594 Phase Bit: 0 00:22:31.594 Status Code: 0x2 00:22:31.594 Status Code Type: 0x0 00:22:31.594 Do Not Retry: 1 00:22:31.594 Error Location: 0x28 00:22:31.594 LBA: 0x0 00:22:31.594 Namespace: 0x0 00:22:31.594 Vendor Log Page: 0x0 00:22:31.594 00:22:31.594 Number of Queues 00:22:31.594 ================ 00:22:31.594 Number of I/O Submission Queues: 128 00:22:31.594 Number of I/O Completion Queues: 128 00:22:31.594 00:22:31.594 ZNS Specific Controller Data 00:22:31.594 ============================ 00:22:31.594 Zone Append Size Limit: 0 00:22:31.594 00:22:31.594 00:22:31.594 Active Namespaces 00:22:31.594 ================= 00:22:31.594 get_feature(0x05) failed 00:22:31.594 Namespace ID:1 00:22:31.594 Command Set Identifier: NVM (00h) 00:22:31.594 Deallocate: Supported 00:22:31.594 Deallocated/Unwritten Error: Not Supported 00:22:31.594 Deallocated Read Value: Unknown 00:22:31.594 Deallocate in Write Zeroes: Not Supported 00:22:31.594 Deallocated Guard Field: 0xFFFF 00:22:31.594 Flush: Supported 00:22:31.594 Reservation: Not Supported 00:22:31.594 Namespace Sharing Capabilities: Multiple Controllers 00:22:31.594 Size (in LBAs): 1953525168 (931GiB) 00:22:31.594 Capacity (in LBAs): 1953525168 (931GiB) 00:22:31.594 Utilization (in LBAs): 1953525168 (931GiB) 00:22:31.594 UUID: 42cdd780-605d-407c-aa38-a84c8ccc85c2 00:22:31.594 Thin Provisioning: Not Supported 00:22:31.594 Per-NS Atomic Units: Yes 00:22:31.594 Atomic Boundary Size (Normal): 0 00:22:31.594 Atomic Boundary Size (PFail): 0 00:22:31.594 Atomic Boundary Offset: 0 00:22:31.594 NGUID/EUI64 Never Reused: No 00:22:31.594 ANA group ID: 1 00:22:31.594 Namespace Write Protected: No 00:22:31.594 Number of LBA Formats: 1 00:22:31.594 Current LBA Format: LBA Format #00 00:22:31.594 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:31.594 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.594 rmmod nvme_tcp 00:22:31.594 rmmod nvme_fabrics 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.594 11:26:57 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:34.124 11:26:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:35.057 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:35.057 0000:00:04.0 (8086 0e20): ioatdma -> 
vfio-pci 00:22:35.057 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:35.057 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:35.993 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:36.251 00:22:36.251 real 0m9.662s 00:22:36.251 user 0m2.054s 00:22:36.251 sys 0m3.502s 00:22:36.251 11:27:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.251 11:27:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.251 ************************************ 00:22:36.251 END TEST nvmf_identify_kernel_target 00:22:36.251 ************************************ 00:22:36.251 11:27:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:36.251 11:27:02 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:36.251 11:27:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:36.251 11:27:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.251 11:27:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.251 ************************************ 00:22:36.251 START TEST nvmf_auth_host 00:22:36.251 ************************************ 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:36.251 * Looking for test storage... 
00:22:36.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.251 
11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.251 
11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.251 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.252 11:27:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:38.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.781 11:27:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:38.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:22:38.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:38.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.781 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:38.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:38.781 00:22:38.781 --- 10.0.0.2 ping statistics --- 00:22:38.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.782 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:22:38.782 00:22:38.782 --- 10.0.0.1 ping statistics --- 00:22:38.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.782 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.782 11:27:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=661709 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 661709 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 661709 ']' 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:38.782 11:27:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=55cfbf862327bdaf7ef7d4e590f0b320 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mC9 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 55cfbf862327bdaf7ef7d4e590f0b320 0 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 55cfbf862327bdaf7ef7d4e590f0b320 0 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=55cfbf862327bdaf7ef7d4e590f0b320 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mC9 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mC9 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.mC9 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f490ad7d24068c2028ee079bb0576684fd537ebf1be81bc2bc3b5bcd474ac50f 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.C86 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f490ad7d24068c2028ee079bb0576684fd537ebf1be81bc2bc3b5bcd474ac50f 3 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f490ad7d24068c2028ee079bb0576684fd537ebf1be81bc2bc3b5bcd474ac50f 3 00:22:38.782 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f490ad7d24068c2028ee079bb0576684fd537ebf1be81bc2bc3b5bcd474ac50f 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.C86 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.C86 00:22:39.040 11:27:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.C86 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c5bd7200363a13c54b3a2301eec90db068e746c339f3c34 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xzV 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c5bd7200363a13c54b3a2301eec90db068e746c339f3c34 0 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c5bd7200363a13c54b3a2301eec90db068e746c339f3c34 0 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c5bd7200363a13c54b3a2301eec90db068e746c339f3c34 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.040 11:27:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xzV 00:22:39.040 11:27:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xzV 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xzV 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2cc5d805713d0c8350ea10f7f9ca47dddc2fbb1a3b098535 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.u15 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2cc5d805713d0c8350ea10f7f9ca47dddc2fbb1a3b098535 2 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2cc5d805713d0c8350ea10f7f9ca47dddc2fbb1a3b098535 2 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2cc5d805713d0c8350ea10f7f9ca47dddc2fbb1a3b098535 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.040 11:27:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.u15 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.u15 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.u15 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d41467a215bc54e23b743923b029d3f 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RfS 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d41467a215bc54e23b743923b029d3f 1 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d41467a215bc54e23b743923b029d3f 1 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d41467a215bc54e23b743923b029d3f 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RfS 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RfS 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RfS 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.040 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=37f30f57f175eb11105159158417d2ca 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Sad 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 37f30f57f175eb11105159158417d2ca 1 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 37f30f57f175eb11105159158417d2ca 1 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=37f30f57f175eb11105159158417d2ca 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:39.041 
11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Sad 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Sad 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Sad 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3fc94caf5a6ceb6b26561b09bd7048c004c758f7de40eb4d 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Cjp 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3fc94caf5a6ceb6b26561b09bd7048c004c758f7de40eb4d 2 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3fc94caf5a6ceb6b26561b09bd7048c004c758f7de40eb4d 2 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=3fc94caf5a6ceb6b26561b09bd7048c004c758f7de40eb4d 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:39.041 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Cjp 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Cjp 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Cjp 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=071b0aca85fd825a9050ed07e1955adb 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.y8L 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 071b0aca85fd825a9050ed07e1955adb 0 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 071b0aca85fd825a9050ed07e1955adb 0 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.298 11:27:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=071b0aca85fd825a9050ed07e1955adb 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.y8L 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.y8L 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.y8L 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=358938bc7d572705c561b2f42c11d1bb825d4f73b3506c3c2a67babd6ca912f2 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.knF 00:22:39.298 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 358938bc7d572705c561b2f42c11d1bb825d4f73b3506c3c2a67babd6ca912f2 3 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 358938bc7d572705c561b2f42c11d1bb825d4f73b3506c3c2a67babd6ca912f2 3 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=358938bc7d572705c561b2f42c11d1bb825d4f73b3506c3c2a67babd6ca912f2 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.knF 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.knF 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.knF 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 661709 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 661709 ']' 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
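The repeated `gen_dhchap_key` blocks in the trace above each draw `len/2` bytes from /dev/urandom with `xxd`, keep the hex expansion as the secret, and pipe it through a small inline `python -` to produce a `DHHC-1:<digest>:<base64>:` secret file. The inline python itself is not shown in the trace; the sketch below reconstructs it from the standard NVMe DH-HMAC-CHAP secret representation, so the CRC32 suffix and its little-endian byte order are assumptions, not a verbatim copy of `format_dhchap_key`:

```python
import base64
import os
import zlib

def gen_dhchap_key(digest: int, length: int) -> str:
    """Sketch of gen_dhchap_key + format_dhchap_key from nvmf/common.sh:
    take length/2 random bytes, use their hex expansion as the ASCII
    secret, then wrap it as DHHC-1:<digest>:<base64(secret + crc32)>:
    (assumed: CRC32 of the secret, little-endian, appended before base64,
    per the NVMe DH-HMAC-CHAP secret format)."""
    secret = os.urandom(length // 2).hex().encode()   # e.g. 32 hex chars for 'null 32'
    crc = zlib.crc32(secret).to_bytes(4, "little")    # integrity suffix on the secret
    return f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:"

# 'gen_dhchap_key null 32' above corresponds to digest index 0, length 32;
# 'gen_dhchap_key sha512 64' to digest index 3, length 64.
key = gen_dhchap_key(0, 32)
```

The digest index matches the `digests` table the trace declares (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3); the resulting string is what gets written to the `/tmp/spdk.key-*` files and later registered with `keyring_file_add_key`.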
00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.299 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mC9 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.C86 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.C86 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xzV 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.u15 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.u15 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RfS 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Sad ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Sad 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Cjp 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 
11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.y8L ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.y8L 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.knF 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:39.557 11:27:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:40.930 Waiting for block devices as requested 00:22:40.930 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:40.930 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:40.930 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:41.188 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:41.188 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:41.188 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:41.188 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:41.446 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:41.446 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:41.446 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:41.446 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:41.704 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:41.704 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:41.704 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:41.704 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:41.962 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:41.962 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:42.220 11:27:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:42.479 No valid GPT data, bailing 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:42.479 00:22:42.479 Discovery Log Number of Records 2, Generation counter 2 00:22:42.479 =====Discovery Log Entry 0====== 00:22:42.479 trtype: tcp 00:22:42.479 adrfam: ipv4 00:22:42.479 subtype: current discovery subsystem 00:22:42.479 treq: not specified, sq flow control disable supported 00:22:42.479 portid: 1 00:22:42.479 trsvcid: 4420 00:22:42.479 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:42.479 traddr: 10.0.0.1 00:22:42.479 eflags: none 00:22:42.479 sectype: none 00:22:42.479 =====Discovery Log Entry 1====== 00:22:42.479 trtype: tcp 00:22:42.479 adrfam: ipv4 00:22:42.479 subtype: nvme subsystem 00:22:42.479 treq: not specified, sq flow control disable supported 00:22:42.479 portid: 1 00:22:42.479 trsvcid: 4420 00:22:42.479 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:42.479 traddr: 10.0.0.1 00:22:42.479 eflags: none 00:22:42.479 sectype: none 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.479 11:27:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.479 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.480 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.738 nvme0n1 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.738 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.739 nvme0n1 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.739 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.002 11:27:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.002 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.003 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.003 11:27:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.003 11:27:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.003 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.003 11:27:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.003 nvme0n1 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.003 11:27:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.003 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.310 nvme0n1 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
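The `nvmet_auth_set_key` traces above echo secrets in the `DHHC-1:<hh>:<base64>:` format. A minimal sketch of inspecting one of the keys from this run, assuming the nvme-cli / NVMe TP 8006 convention that the base64 payload is the secret followed by a 4-byte CRC-32 (that convention is an assumption here, not shown in the log):

```shell
# Key copied verbatim from the keyid=1 nvmet_auth_set_key trace above.
key="DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:"

# The third ':'-separated field is the base64 payload; subtracting the
# assumed 4-byte CRC-32 trailer gives the secret length.
b64=$(printf '%s' "$key" | cut -d: -f3)
secret_len=$(( $(printf '%s' "$b64" | base64 -d | wc -c) - 4 ))
echo "secret length: ${secret_len} bytes"   # 48-byte secret
```

A 48-byte secret is one of the lengths (32/48/64) the DH-HMAC-CHAP secret representation allows, which is consistent with the test exercising keys of different sizes (`:00:`, `:01:`, `:02:`, `:03:` prefixes above).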
00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.310 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 nvme0n1 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
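The `nvme discover` listing earlier in this run returned two discovery-log entries: the well-known discovery subsystem and the SPDK-exported `nqn.2024-02.io.spdk:cnode0`. A small sketch of picking the SPDK subsystem NQN out of that output (the embedded text is reformatted from the trace; field names match what `nvme discover` printed):

```shell
# Two-entry discovery log, as captured in the trace above.
discovery='=====Discovery Log Entry 0======
trtype: tcp
subnqn: nqn.2014-08.org.nvmexpress.discovery
traddr: 10.0.0.1
=====Discovery Log Entry 1======
trtype: tcp
subnqn: nqn.2024-02.io.spdk:cnode0
traddr: 10.0.0.1'

# Count entry headers, then take the subnqn field of the second entry
# (the SPDK-exported subsystem rather than the discovery subsystem).
subnqn=$(printf '%s\n' "$discovery" |
    awk '/Discovery Log Entry/ {n++}
         n == 2 && $1 == "subnqn:" {print $2}')
echo "$subnqn"   # nqn.2024-02.io.spdk:cnode0
```

This is the NQN the test then passes as `-n` to each `bdev_nvme_attach_controller` RPC in the digest/dhgroup/keyid loop.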
00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.569 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.570 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.828 nvme0n1 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.828 11:27:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
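The repeated `nvmet_auth_set_key <digest> <dhgroup> <keyid>` steps traced above select a DHHC-1 key plus its digest and DH group, then program the kernel nvmet target side before each connect attempt (the `echo 'hmac(sha256)'`, `echo ffdhe3072`, `echo DHHC-1:...` lines). A minimal sketch of that shape; the configfs path and attribute names below are assumptions based on the typical Linux nvmet layout, not shown in this log:

```shell
# Hypothetical sketch of the target-side half of nvmet_auth_set_key.
# host_dir would be something like
# /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 (assumed path).
nvmet_auth_set_key() {
    local host_dir=$1 digest=$2 dhgroup=$3 key=$4 ckey=$5

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha256)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${host_dir}/dhchap_key"      # DHHC-1:... secret
    # A controller (bidirectional) key is only written when one exists,
    # mirroring the log's empty ckey for keyid 4:
    [[ -z "${ckey}" ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}
```

The attribute names (`dhchap_hash`, `dhchap_dhgroup`, `dhchap_key`, `dhchap_ctrl_key`) are labeled assumptions; the log only shows the values being echoed, not their destinations.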
00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.828 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.086 nvme0n1 00:22:44.086 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.086 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.086 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.086 11:27:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.086 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.086 11:27:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.086 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.086 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.086 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.087 11:27:10 
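Each `connect_authenticate <digest> <dhgroup> <keyid>` cycle in the trace is a two-RPC pattern: restrict the initiator to a single digest/DH group, then attach with the matching key pair. The flags are taken verbatim from the log; this sketch echoes the commands instead of executing them, and `rpc.py` stands in for the log's `rpc_cmd` wrapper (an assumption). For brevity it always passes `--dhchap-ctrlr-key`, whereas the log omits it when no controller key exists for the keyid:

```shell
# Sketch of the connect_authenticate flow visible in this log.
# Commands are echoed, not run; "rpc.py" is a stand-in for rpc_cmd.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    echo rpc.py bdev_nvme_set_options \
        --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
    echo rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
}
```

After a successful attach, the surrounding test verifies the controller with `bdev_nvme_get_controllers` (checking the name is `nvme0`) and tears it down with `bdev_nvme_detach_controller nvme0` before the next digest/dhgroup/keyid combination.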
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.087 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 nvme0n1 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.345 11:27:10 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 nvme0n1 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.345 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.603 11:27:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.603 nvme0n1 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.603 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.604 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.604 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.604 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.604 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:44.861 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.862 nvme0n1 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.862 11:27:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:45.120 11:27:11 
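The `get_main_ns_ip` trace repeated before every attach maps the transport to the environment variable naming the initiator-side IP (`NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp, yielding 10.0.0.1 here) and dereferences it. A self-contained bash sketch of that logic; the rdma value is an example, as this chunk only shows the tcp path:

```shell
# Example values: 10.0.0.1 matches the log; the rdma IP is made up here.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
    local transport=$1 var ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    # Bail out if the transport is empty or unknown, as the traced
    # [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]] guards do.
    [[ -n ${transport} && -n ${ip_candidates[$transport]} ]] || return 1
    var=${ip_candidates[$transport]}
    ip=${!var}                     # indirect expansion of the variable name
    [[ -n ${ip} ]] && echo "${ip}"
}
```

The indirection is why the trace prints the variable *name* (`ip=NVMF_INITIATOR_IP`) before echoing the resolved address.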
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.120 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.378 nvme0n1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.378 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.638 nvme0n1
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.638 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.896 nvme0n1
00:22:45.896 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.896 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:45.896 11:27:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:45.896 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.896 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:45.896 11:27:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:45.896 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:45.896 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:45.896 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:45.896 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]]
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.155 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.413 nvme0n1
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.413 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=:
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=:
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.414 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.672 nvme0n1
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp:
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=:
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp:
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=:
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:46.672 11:27:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.237 nvme0n1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.237 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.802 nvme0n1
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:47.802 11:27:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.367 nvme0n1
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:22:48.367 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]]
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:48.368 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.933 nvme0n1
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:48.933 11:27:14
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.933 11:27:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.499 nvme0n1 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 
00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:49.499 11:27:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.499 11:27:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.433 nvme0n1 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:50.433 11:27:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:50.434 11:27:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.434 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.434 11:27:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.367 nvme0n1 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.367 11:27:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.300 nvme0n1 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:52.300 11:27:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:52.300 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.301 11:27:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.301 11:27:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.231 nvme0n1 00:22:53.231 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.231 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.231 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.231 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.231 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.231 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.232 11:27:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.232 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.166 nvme0n1 00:22:54.166 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.166 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.166 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.166 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.166 11:27:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.166 11:27:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:54.166 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.167 
11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.167 nvme0n1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.167 
11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.167 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.426 nvme0n1 00:22:54.426 11:27:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.426 11:27:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.426 11:27:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.426 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.684 nvme0n1 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.684 
11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:54.684 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.684 11:27:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.942 nvme0n1 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:54.943 11:27:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.943 11:27:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.201 nvme0n1 00:22:55.201 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.201 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.201 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.202 11:27:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.202 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.460 nvme0n1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:55.461 11:27:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.461 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.721 nvme0n1 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.721 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.980 nvme0n1 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.980 11:27:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:55.980 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.981 11:27:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.239 nvme0n1 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.239 11:27:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.239 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.497 nvme0n1 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:56.497 
11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.497 
11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.497 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 nvme0n1 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.761 11:27:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:56.761 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.762 11:27:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.023 nvme0n1 00:22:57.023 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.023 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.023 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.023 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.023 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.023 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.280 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.280 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.280 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.281 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.538 nvme0n1 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.538 11:27:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.538 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.796 nvme0n1 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.796 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.796 11:27:23 
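The trace above repeats one pattern per (digest, dhgroup, keyid) combination: restrict the initiator's DH-HMAC-CHAP options, attach the controller with the keyed credentials, then detach. A condensed, runnable sketch of that sequence (my paraphrase of the traced `connect_authenticate` helper, with `rpc_cmd` mocked so it runs without a live SPDK target; the digest, dhgroup, key id, address and NQNs are the values from this iteration of the log):

```shell
#!/usr/bin/env bash
# Mock rpc_cmd: record each RPC instead of talking to a real SPDK target.
calls=()
rpc_cmd() { calls+=("$*"); }

digest=sha384 dhgroup=ffdhe4096 keyid=3

# 1. Limit the host to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Attach with the host key and the bidirectional controller key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
# 3. Tear the controller down before the next combination.
rpc_cmd bdev_nvme_detach_controller nvme0

printf '%s\n' "${calls[@]}"
```

In the real script, step 2 is followed by `bdev_nvme_get_controllers | jq -r '.[].name'` and a check that the result is `nvme0`, which is the `[[ nvme0 == \n\v\m\e\0 ]]` comparison seen throughout the log.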
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:57.797 11:27:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.797 11:27:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.055 nvme0n1 00:22:58.055 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.055 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.055 11:27:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.055 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.055 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.055 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:58.313 11:27:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.313 11:27:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.313 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.314 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.314 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.314 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.314 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.314 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.880 nvme0n1 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.880 
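Every attach in this log is preceded by the same `get_main_ns_ip` trace from `nvmf/common.sh`: an associative array maps the transport type to the *name* of the environment variable holding the initiator-side address, and indirect expansion dereferences it. A minimal sketch of that lookup, condensed from the traced lines (the full helper in `nvmf/common.sh` has more fallbacks than shown here; `tcp` → `NVMF_INITIATOR_IP` → `10.0.0.1` are the log's own values):

```shell
#!/usr/bin/env bash
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z "$TEST_TRANSPORT" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}      # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z "$ip" ]] && return 1
    [[ -z "${!ip}" ]] && return 1             # indirect expansion: value of that variable
    echo "${!ip}"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
main_ip=$(get_main_ns_ip)
echo "$main_ip"
```

This explains why the trace first shows `[[ -z NVMF_INITIATOR_IP ]]` (testing the variable name is set in the map) and only then `[[ -z 10.0.0.1 ]]` (testing its dereferenced value) before `echo 10.0.0.1`.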
11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.880 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.881 11:27:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.138 nvme0n1 00:22:59.138 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.396 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.396 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.396 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.397 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.003 nvme0n1 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.003 11:27:25 
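The `DHHC-1:NN:...:` strings traced throughout are the NVMe DH-HMAC-CHAP secret representation: a hash indicator (`00` none, `01`/`02`/`03` for 32/48/64-byte secrets) followed by base64 of the secret with, as I read the format, a 4-byte CRC-32 trailer appended. Decoding key3 from this log illustrates the layout; the expected secret value below is simply the decode of the key the log itself contains:

```shell
#!/usr/bin/env bash
# key3 as traced above (hash indicator 02 => 48-byte secret).
key='DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:'

# Strip the "DHHC-1:02:" prefix and the trailing ":" to get the base64 payload.
b64=${key#DHHC-1:02:}
b64=${b64%:}

# The payload decodes to 52 bytes: the 48-byte secret plus a 4-byte CRC trailer.
secret=$(printf '%s' "$b64" | base64 -d | head -c 48)
echo "secret: $secret"
```

The secret here happens to be printable ASCII hex, which is why `head -c 48` yields a clean string; the CRC trailer bytes are binary and are deliberately not printed.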
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:00.003 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.004 11:27:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.260 nvme0n1 00:23:00.260 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.260 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.260 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.260 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.260 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.260 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.517 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.080 nvme0n1 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:01.080 11:27:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.080 11:27:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.009 nvme0n1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.009 11:27:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.940 nvme0n1 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.940 11:27:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.503 nvme0n1 00:23:03.503 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.503 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.503 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.503 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.503 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.760 11:27:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.760 11:27:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.760 11:27:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.693 nvme0n1 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.693 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.694 
11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.694 11:27:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.694 11:27:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.626 nvme0n1 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:05.626 
11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.626 nvme0n1 00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.626 11:27:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.626 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.884 nvme0n1
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:05.884 11:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:05.885 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:05.885 11:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.143 nvme0n1
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.143 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.401 nvme0n1
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=:
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=:
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.401 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.659 nvme0n1
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp:
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=:
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp:
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]]
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=:
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:06.659 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.660 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.917 nvme0n1
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:06.917 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==:
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]]
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==:
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:06.918 11:27:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.175 nvme0n1
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn:
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF:
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.175 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.432 nvme0n1
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==:
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]]
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577:
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:07.432 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host --
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.433 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.690 nvme0n1 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.690 11:27:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.690 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.947 nvme0n1 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:23:07.947 11:27:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.947 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.948 11:27:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.205 nvme0n1 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:08.205 11:27:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.205 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.464 nvme0n1 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.464 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.723 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.981 nvme0n1 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.981 11:27:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.239 nvme0n1 00:23:09.239 11:27:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.239 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.497 nvme0n1 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.497 
11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.497 11:27:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.497 11:27:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.063 nvme0n1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.064 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.629 nvme0n1 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.629 11:27:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.629 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.630 11:27:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.196 nvme0n1 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.196 11:27:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.196 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 nvme0n1 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.762 11:27:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.762 11:27:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.329 nvme0n1 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVjZmJmODYyMzI3YmRhZjdlZjdkNGU1OTBmMGIzMjBZIyJp: 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: ]] 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjQ5MGFkN2QyNDA2OGMyMDI4ZWUwNzliYjA1NzY2ODRmZDUzN2ViZjFiZTgxYmMyYmMzYjViY2Q0NzRhYzUwZug4RyA=: 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.329 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.330 11:27:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.263 nvme0n1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.263 11:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.197 nvme0n1 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWQ0MTQ2N2EyMTViYzU0ZTIzYjc0MzkyM2IwMjlkM2YyR+Dn: 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: ]] 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzdmMzBmNTdmMTc1ZWIxMTEwNTE1OTE1ODQxN2QyY2H92/ZF: 00:23:14.197 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.198 11:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.131 nvme0n1 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2ZjOTRjYWY1YTZjZWI2YjI2NTYxYjA5YmQ3MDQ4YzAwNGM3NThmN2RlNDBlYjRkB+IQHg==: 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDcxYjBhY2E4NWZkODI1YTkwNTBlZDA3ZTE5NTVhZGIqL577: 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.131 11:27:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.131 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.067 nvme0n1 00:23:16.067 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.067 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.067 11:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.067 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.067 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.067 11:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.067 11:27:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU4OTM4YmM3ZDU3MjcwNWM1NjFiMmY0MmMxMWQxYmI4MjVkNGY3M2IzNTA2YzNjMmE2N2JhYmQ2Y2E5MTJmMmC8NWI=: 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:16.067 11:27:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.067 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.001 nvme0n1 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWM1YmQ3MjAwMzYzYTEzYzU0YjNhMjMwMWVlYzkwZGIwNjhlNzQ2YzMzOWYzYzM0qUrOeA==: 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmNjNWQ4MDU3MTNkMGM4MzUwZWExMGY3ZjljYTQ3ZGRkYzJmYmIxYTNiMDk4NTM1sTDa6w==: 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.001 11:27:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.001 request: 00:23:17.001 { 00:23:17.001 "name": "nvme0", 00:23:17.001 "trtype": "tcp", 00:23:17.001 "traddr": "10.0.0.1", 00:23:17.001 "adrfam": "ipv4", 00:23:17.002 "trsvcid": "4420", 00:23:17.002 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:17.002 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:17.002 "prchk_reftag": false, 00:23:17.002 "prchk_guard": false, 00:23:17.002 "hdgst": false, 00:23:17.002 "ddgst": false, 00:23:17.002 "method": "bdev_nvme_attach_controller", 00:23:17.002 "req_id": 1 00:23:17.002 } 00:23:17.002 Got JSON-RPC error response 00:23:17.002 response: 00:23:17.002 { 00:23:17.002 "code": -5, 00:23:17.002 "message": "Input/output error" 00:23:17.002 } 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.002 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.259 request: 00:23:17.259 { 00:23:17.259 "name": "nvme0", 00:23:17.259 "trtype": "tcp", 00:23:17.259 "traddr": "10.0.0.1", 00:23:17.259 "adrfam": "ipv4", 00:23:17.259 "trsvcid": "4420", 00:23:17.259 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:17.259 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:17.259 "prchk_reftag": false, 00:23:17.259 "prchk_guard": false, 00:23:17.259 "hdgst": false, 00:23:17.259 "ddgst": false, 00:23:17.259 "dhchap_key": "key2", 00:23:17.259 "method": "bdev_nvme_attach_controller", 00:23:17.259 "req_id": 1 00:23:17.259 } 00:23:17.259 Got JSON-RPC error response 00:23:17.259 response: 00:23:17.259 { 
00:23:17.259 "code": -5, 00:23:17.259 "message": "Input/output error" 00:23:17.259 } 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.259 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.260 
11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.260 request: 00:23:17.260 { 00:23:17.260 "name": "nvme0", 00:23:17.260 "trtype": "tcp", 00:23:17.260 "traddr": "10.0.0.1", 00:23:17.260 "adrfam": "ipv4", 00:23:17.260 "trsvcid": "4420", 00:23:17.260 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:17.260 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:17.260 
"prchk_reftag": false, 00:23:17.260 "prchk_guard": false, 00:23:17.260 "hdgst": false, 00:23:17.260 "ddgst": false, 00:23:17.260 "dhchap_key": "key1", 00:23:17.260 "dhchap_ctrlr_key": "ckey2", 00:23:17.260 "method": "bdev_nvme_attach_controller", 00:23:17.260 "req_id": 1 00:23:17.260 } 00:23:17.260 Got JSON-RPC error response 00:23:17.260 response: 00:23:17.260 { 00:23:17.260 "code": -5, 00:23:17.260 "message": "Input/output error" 00:23:17.260 } 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.260 rmmod nvme_tcp 00:23:17.260 rmmod nvme_fabrics 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 661709 ']' 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 661709 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 661709 ']' 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 661709 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 661709 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 661709' 00:23:17.260 killing process with pid 661709 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 661709 00:23:17.260 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 661709 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:23:17.521 11:27:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.464 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.722 11:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:19.722 11:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:19.723 11:27:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:21.099 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.099 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.099 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.099 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 
00:23:21.099 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.099 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.099 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.099 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.099 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:22.036 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:22.036 11:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mC9 /tmp/spdk.key-null.xzV /tmp/spdk.key-sha256.RfS /tmp/spdk.key-sha384.Cjp /tmp/spdk.key-sha512.knF /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:22.036 11:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:23.411 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:23.411 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:23.411 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:23.411 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:23.411 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:23.411 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:23.411 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:23.411 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:23.411 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:23.411 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:23.411 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:23.411 
0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:23.411 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:23.411 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:23.411 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:23.411 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:23.411 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:23.411 00:23:23.411 real 0m47.243s 00:23:23.411 user 0m45.022s 00:23:23.411 sys 0m5.813s 00:23:23.411 11:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:23.411 11:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.411 ************************************ 00:23:23.411 END TEST nvmf_auth_host 00:23:23.411 ************************************ 00:23:23.411 11:27:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:23.411 11:27:49 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:23.411 11:27:49 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:23.411 11:27:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:23.411 11:27:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.411 11:27:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:23.411 ************************************ 00:23:23.411 START TEST nvmf_digest 00:23:23.411 ************************************ 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:23.411 * Looking for test storage... 
00:23:23.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.411 11:27:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.412 11:27:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:25.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:25.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:25.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:25.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:23:25.944 00:23:25.944 --- 10.0.0.2 ping statistics --- 00:23:25.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.944 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:23:25.944 00:23:25.944 --- 10.0.0.1 ping statistics --- 00:23:25.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.944 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:25.944 11:27:51 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:25.945 ************************************ 00:23:25.945 START TEST nvmf_digest_clean 00:23:25.945 ************************************ 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:25.945 11:27:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=671175 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 671175 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 671175 ']' 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.945 [2024-07-12 11:27:51.693301] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:23:25.945 [2024-07-12 11:27:51.693379] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.945 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.945 [2024-07-12 11:27:51.756580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.945 [2024-07-12 11:27:51.867745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.945 [2024-07-12 11:27:51.867800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.945 [2024-07-12 11:27:51.867815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.945 [2024-07-12 11:27:51.867826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.945 [2024-07-12 11:27:51.867835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.945 [2024-07-12 11:27:51.867874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.945 11:27:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.945 null0 00:23:25.945 [2024-07-12 11:27:52.030402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.945 [2024-07-12 11:27:52.054608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:25.945 
11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=671200 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 671200 /var/tmp/bperf.sock 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 671200 ']' 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:25.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.945 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:26.203 [2024-07-12 11:27:52.099262] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:23:26.203 [2024-07-12 11:27:52.099324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671200 ] 00:23:26.203 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.203 [2024-07-12 11:27:52.155643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.203 [2024-07-12 11:27:52.259979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.460 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.460 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:26.460 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:26.460 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:26.460 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:26.718 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:26.718 11:27:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:27.295 nvme0n1 00:23:27.295 11:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:27.295 11:27:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:23:27.295 Running I/O for 2 seconds... 00:23:29.192 00:23:29.192 Latency(us) 00:23:29.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:29.192 nvme0n1 : 2.00 20249.60 79.10 0.00 0.00 6314.05 3276.80 14272.28 00:23:29.192 =================================================================================================================== 00:23:29.192 Total : 20249.60 79.10 0.00 0.00 6314.05 3276.80 14272.28 00:23:29.192 0 00:23:29.192 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:29.192 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:29.192 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:29.192 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:29.192 | select(.opcode=="crc32c") 00:23:29.192 | "\(.module_name) \(.executed)"' 00:23:29.192 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 671200 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 671200 ']' 00:23:29.449 11:27:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 671200 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 671200 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 671200' 00:23:29.449 killing process with pid 671200 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 671200 00:23:29.449 Received shutdown signal, test time was about 2.000000 seconds 00:23:29.449 00:23:29.449 Latency(us) 00:23:29.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.449 =================================================================================================================== 00:23:29.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.449 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 671200 00:23:29.707 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:29.707 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:29.707 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:29.965 11:27:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=671597 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 671597 /var/tmp/bperf.sock 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 671597 ']' 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.965 11:27:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:29.965 [2024-07-12 11:27:55.886007] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:23:29.965 [2024-07-12 11:27:55.886096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671597 ] 00:23:29.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:29.965 Zero copy mechanism will not be used. 00:23:29.965 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.965 [2024-07-12 11:27:55.944535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.965 [2024-07-12 11:27:56.049726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.222 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.223 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:30.223 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:30.223 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:30.223 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:30.480 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.480 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.737 nvme0n1 00:23:30.737 11:27:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:30.737 11:27:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:30.737 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:30.737 Zero copy mechanism will not be used. 00:23:30.737 Running I/O for 2 seconds... 00:23:33.260 00:23:33.260 Latency(us) 00:23:33.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.260 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:33.260 nvme0n1 : 2.00 6020.12 752.51 0.00 0.00 2653.63 679.63 4757.43 00:23:33.260 =================================================================================================================== 00:23:33.260 Total : 6020.12 752.51 0.00 0.00 2653.63 679.63 4757.43 00:23:33.260 0 00:23:33.260 11:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:33.260 11:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:33.260 11:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:33.260 11:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:33.260 11:27:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:33.260 | select(.opcode=="crc32c") 00:23:33.260 | "\(.module_name) \(.executed)"' 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 671597 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 671597 ']' 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 671597 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 671597 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 671597' 00:23:33.260 killing process with pid 671597 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 671597 00:23:33.260 Received shutdown signal, test time was about 2.000000 seconds 00:23:33.260 00:23:33.260 Latency(us) 00:23:33.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.260 =================================================================================================================== 00:23:33.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 671597 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:33.260 
11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=672015 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 672015 /var/tmp/bperf.sock 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 672015 ']' 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:33.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.260 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:33.518 [2024-07-12 11:27:59.423016] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:23:33.518 [2024-07-12 11:27:59.423099] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672015 ] 00:23:33.518 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.518 [2024-07-12 11:27:59.482403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.518 [2024-07-12 11:27:59.587748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.518 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.518 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:33.518 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:33.518 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:33.518 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:34.085 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.085 11:27:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.342 nvme0n1 00:23:34.342 11:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:34.342 11:28:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:23:34.600 Running I/O for 2 seconds... 00:23:36.529 00:23:36.530 Latency(us) 00:23:36.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.530 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:36.530 nvme0n1 : 2.01 20196.90 78.89 0.00 0.00 6322.83 2669.99 10971.21 00:23:36.530 =================================================================================================================== 00:23:36.530 Total : 20196.90 78.89 0.00 0.00 6322.83 2669.99 10971.21 00:23:36.530 0 00:23:36.530 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:36.530 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:36.530 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:36.530 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:36.530 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:36.530 | select(.opcode=="crc32c") 00:23:36.530 | "\(.module_name) \(.executed)"' 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 672015 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 672015 ']' 00:23:36.787 11:28:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 672015 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 672015 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 672015' 00:23:36.787 killing process with pid 672015 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 672015 00:23:36.787 Received shutdown signal, test time was about 2.000000 seconds 00:23:36.787 00:23:36.787 Latency(us) 00:23:36.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.787 =================================================================================================================== 00:23:36.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.787 11:28:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 672015 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:37.083 11:28:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=672496 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 672496 /var/tmp/bperf.sock 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 672496 ']' 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:37.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.083 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:37.083 [2024-07-12 11:28:03.171135] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:23:37.083 [2024-07-12 11:28:03.171234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid672496 ] 00:23:37.083 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:37.083 Zero copy mechanism will not be used. 00:23:37.364 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.364 [2024-07-12 11:28:03.229134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.364 [2024-07-12 11:28:03.333577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.364 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.364 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:37.364 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:37.364 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:37.364 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:37.621 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:37.621 11:28:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:38.182 nvme0n1 00:23:38.182 11:28:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:38.182 11:28:04 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:38.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:38.182 Zero copy mechanism will not be used. 00:23:38.182 Running I/O for 2 seconds... 00:23:40.704 00:23:40.704 Latency(us) 00:23:40.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.704 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:40.704 nvme0n1 : 2.00 6071.38 758.92 0.00 0.00 2628.49 1565.58 4296.25 00:23:40.704 =================================================================================================================== 00:23:40.704 Total : 6071.38 758.92 0.00 0.00 2628.49 1565.58 4296.25 00:23:40.704 0 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:40.704 | select(.opcode=="crc32c") 00:23:40.704 | "\(.module_name) \(.executed)"' 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 672496 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 672496 ']' 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 672496 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 672496 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 672496' 00:23:40.704 killing process with pid 672496 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 672496 00:23:40.704 Received shutdown signal, test time was about 2.000000 seconds 00:23:40.704 00:23:40.704 Latency(us) 00:23:40.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.704 =================================================================================================================== 00:23:40.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 672496 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 671175 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 671175 ']' 00:23:40.704 11:28:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 671175 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 671175 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 671175' 00:23:40.704 killing process with pid 671175 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 671175 00:23:40.704 11:28:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 671175 00:23:40.962 00:23:40.962 real 0m15.442s 00:23:40.962 user 0m30.619s 00:23:40.962 sys 0m4.194s 00:23:40.962 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:40.962 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:40.962 ************************************ 00:23:40.962 END TEST nvmf_digest_clean 00:23:40.962 ************************************ 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest 
-- common/autotest_common.sh@10 -- # set +x 00:23:41.221 ************************************ 00:23:41.221 START TEST nvmf_digest_error 00:23:41.221 ************************************ 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=672918 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 672918 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 672918 ']' 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.221 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.221 [2024-07-12 11:28:07.168097] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:23:41.221 [2024-07-12 11:28:07.168207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.221 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.221 [2024-07-12 11:28:07.231774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.221 [2024-07-12 11:28:07.333140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.221 [2024-07-12 11:28:07.333225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.221 [2024-07-12 11:28:07.333249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.221 [2024-07-12 11:28:07.333260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.221 [2024-07-12 11:28:07.333270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.221 [2024-07-12 11:28:07.333295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 [2024-07-12 11:28:07.401840] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 null0 00:23:41.480 [2024-07-12 11:28:07.516201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.480 
[2024-07-12 11:28:07.540391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=673053 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 673053 /var/tmp/bperf.sock 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 673053 ']' 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:41.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.480 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:41.480 [2024-07-12 11:28:07.583397] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:23:41.480 [2024-07-12 11:28:07.583461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673053 ] 00:23:41.480 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.738 [2024-07-12 11:28:07.640953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.738 [2024-07-12 11:28:07.747987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.738 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.738 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:41.738 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:41.738 11:28:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:42.303 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:42.303 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.303 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.303 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:42.303 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:42.303 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:42.561 nvme0n1 00:23:42.561 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:42.561 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.561 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.561 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.561 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:42.561 11:28:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:42.561 Running I/O for 2 seconds... 
00:23:42.561 [2024-07-12 11:28:08.646658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.561 [2024-07-12 11:28:08.646716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.561 [2024-07-12 11:28:08.646737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.561 [2024-07-12 11:28:08.663516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.561 [2024-07-12 11:28:08.663550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.561 [2024-07-12 11:28:08.663568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.561 [2024-07-12 11:28:08.675904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.561 [2024-07-12 11:28:08.675962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.561 [2024-07-12 11:28:08.675981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.561 [2024-07-12 11:28:08.688458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.561 [2024-07-12 11:28:08.688491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.561 [2024-07-12 11:28:08.688509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.701375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.701407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.701425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.712922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.712952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.712968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.725293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.725339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.725356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.738999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.739028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.739045] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.753114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.753146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.753163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.768951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.768983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.769000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.780560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.780589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.780604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.795425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.795472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7074 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.795490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.811384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.811414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.811429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.822469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.822498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.819 [2024-07-12 11:28:08.822513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.819 [2024-07-12 11:28:08.837447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.819 [2024-07-12 11:28:08.837475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.837491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.848481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.848512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.848529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.863939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.863970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.863986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.875957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.875985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.876006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.888935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.888965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.888981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.901306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.901334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.901364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.912830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.912859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.912897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.925601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.925650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.925666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.938086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.938117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.938135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:42.820 [2024-07-12 11:28:08.951079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19d5d50) 00:23:42.820 [2024-07-12 11:28:08.951110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.820 [2024-07-12 11:28:08.951127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.077 [2024-07-12 11:28:08.963839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.077 [2024-07-12 11:28:08.963889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.077 [2024-07-12 11:28:08.963907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.077 [2024-07-12 11:28:08.976634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:08.976662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:08.976678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:08.989436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:08.989468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:08.989485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.000059] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.000087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.000103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.013385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.013417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.013435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.025733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.025781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.025798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.039630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.039659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.039675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.054532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.054575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.054591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.065423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.065451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.065467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.078582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.078615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.078632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.089943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.089972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.089993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.102999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.103030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.103046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.117716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.117746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.117764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.128978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.129008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 11:28:09.129026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:43.078 [2024-07-12 11:28:09.142990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:43.078 [2024-07-12 11:28:09.143019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.078 [2024-07-12 
11:28:09.143035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.078 [2024-07-12 11:28:09.157251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.078 [2024-07-12 11:28:09.157282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.078 [2024-07-12 11:28:09.157298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.078 [2024-07-12 11:28:09.168716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.078 [2024-07-12 11:28:09.168746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.078 [2024-07-12 11:28:09.168762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.078 [2024-07-12 11:28:09.181655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.078 [2024-07-12 11:28:09.181689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.078 [2024-07-12 11:28:09.181705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.078 [2024-07-12 11:28:09.194468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.078 [2024-07-12 11:28:09.194497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.078 [2024-07-12 11:28:09.194513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.078 [2024-07-12 11:28:09.207129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.078 [2024-07-12 11:28:09.207178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.078 [2024-07-12 11:28:09.207194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.336 [2024-07-12 11:28:09.220159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.336 [2024-07-12 11:28:09.220192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.336 [2024-07-12 11:28:09.220209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.336 [2024-07-12 11:28:09.232439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.336 [2024-07-12 11:28:09.232485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.336 [2024-07-12 11:28:09.232501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.336 [2024-07-12 11:28:09.245255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.336 [2024-07-12 11:28:09.245287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.245304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.256692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.256721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.256736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.270824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.270873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.270893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.283462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.283493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.283511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.295115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.295145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.295176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.307783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.307813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.307829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.320565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.320597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.320614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.333787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.333819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.333836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.344349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.344378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.344394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.358823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.358874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.358893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.372385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.372431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.372447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.384400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.384431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.384449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.396967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.396998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.397015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.409481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.409512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.409530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.422230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.422258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.422278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.435502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.435532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.435563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.447784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.447816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.447833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.337 [2024-07-12 11:28:09.460219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.337 [2024-07-12 11:28:09.460266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.337 [2024-07-12 11:28:09.460284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.472323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.472367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.472383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.484933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.484978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.484996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.498316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.498344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.498359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.512138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.512170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.512187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.523421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.523450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.523465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.538135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.538177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.538192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.552544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.552575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.552608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.563840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.563882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.563902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.577608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.577645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.577662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.588910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.588956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.588972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.603657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.603687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.603718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.614576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.614604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.614619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.628710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.628739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.628754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.640825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.640878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.640906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.655642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.655680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.655695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.666695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.666724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.666739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.682873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.682904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.682922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.697382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.697414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.697431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.708259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.708288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.708318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.596 [2024-07-12 11:28:09.722605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.596 [2024-07-12 11:28:09.722633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.596 [2024-07-12 11:28:09.722649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.739465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.739512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.753746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.753775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.753791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.768799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.768834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.768877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.783952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.783998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.784015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.796847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.796886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.796905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.810630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.810661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.810679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.821546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.821575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.821590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.835284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.835315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.835346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.851364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.851392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.851408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.862124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.862153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.862185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.876909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.876938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.876955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.891753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.891783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.891815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.907260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.907288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.907304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.920984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.921014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.921031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.935007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.935037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.935055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.947040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.947069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.947085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.961316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.961360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:43.855 [2024-07-12 11:28:09.975856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:43.855 [2024-07-12 11:28:09.975894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:43.855 [2024-07-12 11:28:09.975912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:09.987814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:09.987843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:09.987882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:09.999601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:09.999635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:09.999659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.013119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.013170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.013190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.027712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.027775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.041263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.041293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.041310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.053594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.053623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.053639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.067607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.067638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.067656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.078971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.079002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.079019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.091129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.091171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.091188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.103446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.103477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.103495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.115460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.115491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.115508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.128064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.128095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.128129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.141654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.141684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.141701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.153780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.153811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.153829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.166017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.166064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.166080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.178030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.178060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.178077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.192084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.192114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.114 [2024-07-12 11:28:10.192132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.114 [2024-07-12 11:28:10.202583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.114 [2024-07-12 11:28:10.202611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.115 [2024-07-12 11:28:10.202627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.115 [2024-07-12 11:28:10.216631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.115 [2024-07-12 11:28:10.216662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.115 [2024-07-12 11:28:10.216686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.115 [2024-07-12 11:28:10.229099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.115 [2024-07-12 11:28:10.229132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.115 [2024-07-12 11:28:10.229150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.115 [2024-07-12 11:28:10.241948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.115 [2024-07-12 11:28:10.241979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.115 [2024-07-12 11:28:10.241996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.254559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.254588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 
11:28:10.254604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.268993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.269024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 11:28:10.269041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.280066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.280095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 11:28:10.280111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.294843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.294895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 11:28:10.294913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.310267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.310298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25155 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 11:28:10.310316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.321297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.321328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 11:28:10.321345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.337334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.373 [2024-07-12 11:28:10.337383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.373 [2024-07-12 11:28:10.337400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.373 [2024-07-12 11:28:10.347614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.347642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.347658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.360666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.360695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.360711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.374066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.374097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.374115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.386563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.386594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.386612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.398988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.399034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.399051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.411086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 
00:23:44.374 [2024-07-12 11:28:10.411115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.411147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.423389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.423418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.423434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.435437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.435466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.435481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.449739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.449766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.449782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.460255] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.460286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.460320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.475762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.475808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.475825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.490052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.490083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.490100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.374 [2024-07-12 11:28:10.501218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.374 [2024-07-12 11:28:10.501246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.374 [2024-07-12 11:28:10.501261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.517465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.517494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.517509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.534008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.534038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.534055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.550032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.550062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.550079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.561971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.562002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.562027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.572749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.572794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.572811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.587630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.587659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.587675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.602200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.602230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.602247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:44.632 [2024-07-12 11:28:10.618056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50) 00:23:44.632 [2024-07-12 11:28:10.618088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.632 [2024-07-12 11:28:10.618106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.632 [2024-07-12 11:28:10.628853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d5d50)
00:23:44.632 [2024-07-12 11:28:10.628905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:44.632 [2024-07-12 11:28:10.628923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:44.632
00:23:44.632 Latency(us)
00:23:44.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.632 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:44.632 nvme0n1 : 2.00 19431.65 75.90 0.00 0.00 6579.61 3422.44 19223.89
00:23:44.632 ===================================================================================================================
00:23:44.633 Total : 19431.65 75.90 0.00 0.00 6579.61 3422.44 19223.89
00:23:44.633 0
00:23:44.633 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:44.633 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:44.633 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:44.633 | .driver_specific
00:23:44.633 | .nvme_error
00:23:44.633 | .status_code
00:23:44.633 | .command_transient_transport_error'
00:23:44.633 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 ))
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 673053
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 673053 ']'
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 673053
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 673053
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 673053'
00:23:44.891 killing process with pid 673053
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 673053
00:23:44.891 Received shutdown signal, test time was about 2.000000 seconds
00:23:44.891
00:23:44.891 Latency(us)
00:23:44.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.891 ===================================================================================================================
00:23:44.891 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:44.891 11:28:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 673053
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=673448
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 673448 /var/tmp/bperf.sock
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 673448 ']'
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:45.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:45.149 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:45.149 [2024-07-12 11:28:11.264775] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:23:45.149 [2024-07-12 11:28:11.264875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673448 ]
00:23:45.149 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:45.149 Zero copy mechanism will not be used.
00:23:45.408 EAL: No free 2048 kB hugepages reported on node 1
00:23:45.408 [2024-07-12 11:28:11.324195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:45.408 [2024-07-12 11:28:11.426859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:45.408 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:45.408 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:23:45.408 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:45.408 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:45.973 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:45.973 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.973 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:45.973 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.973 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:45.973 11:28:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:46.231 nvme0n1
00:23:46.231 11:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:46.231 11:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:46.231 11:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:46.231 11:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:46.231 11:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:46.231 11:28:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:46.231 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:46.231 Zero copy mechanism will not be used.
00:23:46.231 Running I/O for 2 seconds...
00:23:46.231 [2024-07-12 11:28:12.255714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0)
00:23:46.231 [2024-07-12 11:28:12.255774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.231 [2024-07-12 11:28:12.255794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error, READ command, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for the intervening READ completions, 11:28:12.261 through 11:28:12.322, differing only in timestamp, cid, lba, and sqhd ...]
00:23:46.231 [2024-07-12 11:28:12.328202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0)
00:23:46.231 [2024-07-12 11:28:12.328249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15
nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.231 [2024-07-12 11:28:12.328267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.231 [2024-07-12 11:28:12.333888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.231 [2024-07-12 11:28:12.333917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.231 [2024-07-12 11:28:12.333934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.231 [2024-07-12 11:28:12.339864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.231 [2024-07-12 11:28:12.339913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.231 [2024-07-12 11:28:12.339933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.231 [2024-07-12 11:28:12.345729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.231 [2024-07-12 11:28:12.345775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.231 [2024-07-12 11:28:12.345792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.231 [2024-07-12 11:28:12.351568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.232 [2024-07-12 11:28:12.351599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.232 [2024-07-12 11:28:12.351633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.232 [2024-07-12 11:28:12.357844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.232 [2024-07-12 11:28:12.357886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.232 [2024-07-12 11:28:12.357905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.363274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.363311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.363330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.369790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.369836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.369854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.376418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 
00:23:46.491 [2024-07-12 11:28:12.376449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.376468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.381994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.382025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.382043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.387671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.387703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.387721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.393387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.393418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.393436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.398897] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.398928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.398946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.404433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.404465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.404496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.411062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.411093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.411111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.417539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.417571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.417603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.422435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.422467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.422485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.425876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.425905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.425927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.432113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.432158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.432175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.437504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.437534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.437551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.442194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.442223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.442240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.446928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.446958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.446976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.451565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.451594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.451612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.456352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.456381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.456409] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.461554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.461582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.461599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.465801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.465830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.465848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.470112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.470156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.470173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.474726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.474771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.474788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.479694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.479724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.479745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.484644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.484674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.484692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.490145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.490189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.490208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.491 [2024-07-12 11:28:12.495001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.491 [2024-07-12 11:28:12.495032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.491 [2024-07-12 11:28:12.495049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.498841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.498877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.498896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.503312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.503341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.503359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.507964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.507994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.508013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.512772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.512803] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.512824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.518108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.518139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.518166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.523137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.523169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.523186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.527774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.527805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.527822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.532554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.532585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.532602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.537193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.537223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.537246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.541909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.541939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.541956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.546809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.546838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.546855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.552308] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.552340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.552357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.559013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.559044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.559062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.566816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.566847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.566872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.574402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.574433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.574451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.582166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.582197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.582230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.589829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.589861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.589887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.597648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.597685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.597703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.605314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.605345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.605363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.613011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.613042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.613060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.492 [2024-07-12 11:28:12.620629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.492 [2024-07-12 11:28:12.620660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.492 [2024-07-12 11:28:12.620677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.751 [2024-07-12 11:28:12.628153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.628185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.628202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.635762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.635793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.635811] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.643261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.643292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.643310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.650901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.650932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.650950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.658480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.658512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.658529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.666104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.666136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.666154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.673507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.673538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.673556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.679683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.679715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.679732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.686316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.686348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.686366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.692674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.692705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.692723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.698471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.698503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.698520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.704278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.704310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.704328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.710295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.710327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.710344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.715739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 
11:28:12.715771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.715794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.721211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.721244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.721261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.726997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.727028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.727046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.732622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.732653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.732671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.738127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.738176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.743733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.743765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.743783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.749348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.749379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.749397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.755082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.755114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.755131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.760550] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.760582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.760600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.766207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.766244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.766263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.771932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.771963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.771981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.777663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.777695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.777713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.783083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.783114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.783132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.786025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.786053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.786070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.791014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.791044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.752 [2024-07-12 11:28:12.791060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.752 [2024-07-12 11:28:12.795977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.752 [2024-07-12 11:28:12.796007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.796024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.801411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.801441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.801458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.806740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.806770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.806787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.811879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.811910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.811927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.816457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.816486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.816502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.821015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.821044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.821061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.825685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.825714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.825730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.830250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.830280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.830296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.834933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.834963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.834980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.839420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.839449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.839466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.844212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.844241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.844257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.849388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.849418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.849441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.855064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.855094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.855111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.860784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.860814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.860831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.866139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.866170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.866203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.871137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.871184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.871200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.876677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.876708] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.876741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:46.753 [2024-07-12 11:28:12.881483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:46.753 [2024-07-12 11:28:12.881513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.753 [2024-07-12 11:28:12.881531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.886046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.886076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.886093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.890758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.890787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.890804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.895315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 
00:23:47.012 [2024-07-12 11:28:12.895360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.895377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.900057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.900086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.900104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.905080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.905111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.905143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.910584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.910617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.910634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.914920] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.914951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.914968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.919537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.919568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.919585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.924013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.924043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.924060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.928659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.928690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.928707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.933190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.933221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.933245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.937916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.937945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.937961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.943416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.943447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.943479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.948599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.948631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.948663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.953612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.953644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.953661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.959060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.959092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.959110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.963750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.963781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.963798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.968550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.968580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.968598] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.973325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.973356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.973373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.978187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.978223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.978241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.984102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.984133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.984150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.989238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.989269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.989285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.994177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.994208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.994225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:12.998859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:12.998899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:12.998917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.003740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.003771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:13.003788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.008471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.008502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:13.008518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.013435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.013466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:13.013484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.016599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.016645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:13.016663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.021924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.021956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:13.021973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.026445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.026475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.012 [2024-07-12 11:28:13.026492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.012 [2024-07-12 11:28:13.031635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.012 [2024-07-12 11:28:13.031665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.031682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.036629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.036660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.036677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.041286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.041315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.041331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.045749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.045778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.045794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.050398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.050427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.050443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.054941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.054971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.054988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.059488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.059516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.059538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.063955] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.063986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.064003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.069179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.069207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.069223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.073311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.073340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.073356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.078002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.078032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.078049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.082638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.082667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.082683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.087427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.087456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.087471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.092382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.092427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.092443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.097818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.097850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.097874] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.102666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.102704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.102722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.107697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.107726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.107743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.113168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.113199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.113217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.118634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.118665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 
11:28:13.118681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.124890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.124922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.124940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.132552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.132583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.132600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.013 [2024-07-12 11:28:13.139523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.013 [2024-07-12 11:28:13.139555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.013 [2024-07-12 11:28:13.139573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.145109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.145142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.145160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.150720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.150751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.150783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.155715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.155763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.155780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.160816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.160863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.160889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.165639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.165686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.165703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.170430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.170461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.170478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.176453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.176501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.176518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.181371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.181402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.181419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.186333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 
00:23:47.272 [2024-07-12 11:28:13.186379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.186396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.191169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.191200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.191217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.195891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.195926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.195944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.200472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.200503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.200521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.205087] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.205118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.205135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.209798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.209829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.209846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.214375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.214422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.214439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.219065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.219096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.219113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.223417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.223447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.272 [2024-07-12 11:28:13.223464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.272 [2024-07-12 11:28:13.228092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.272 [2024-07-12 11:28:13.228122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.228139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.233441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.233487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.233505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.239106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.239138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.239155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.244850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.244888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.244907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.252455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.252501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.252519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.259129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.259161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.259194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.265115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.265146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 
11:28:13.265179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.270060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.270093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.270111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.274875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.274906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.274923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.280023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.280054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.280072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.285743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.285775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.285798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.290391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.290437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.290454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.297375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.297409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.297427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.303746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.303778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.303810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.310077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.310124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.310140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.316417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.316450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.316467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.322839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.322897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.322916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.330139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.330185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.330203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.336382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.336415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.336432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.341793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.341831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.341849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.347759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.347792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.347824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.353696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.353743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.353760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.359287] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.359334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.359351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.365184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.365231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.365248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.371477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.371510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.371528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.379002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.379034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.379052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.385288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.385321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.385339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.391272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.391320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.391337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.397350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.397383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.397415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.273 [2024-07-12 11:28:13.402821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.273 [2024-07-12 11:28:13.402853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.273 [2024-07-12 11:28:13.402878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.533 [2024-07-12 11:28:13.407467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.533 [2024-07-12 11:28:13.407498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.533 [2024-07-12 11:28:13.407515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.533 [2024-07-12 11:28:13.412164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.533 [2024-07-12 11:28:13.412196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.533 [2024-07-12 11:28:13.412214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.533 [2024-07-12 11:28:13.416909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.533 [2024-07-12 11:28:13.416940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.416957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.421477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.421507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.421540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.426135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.426165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.426197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.431002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.431034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.431051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.435860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.435896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.435934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.440854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.440895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.440912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.445574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.445605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.445622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.450325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.450356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.450373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.455791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.455822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.455857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.462386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.462419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.462437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.469963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.469995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.470013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.476181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.476213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.476231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.482614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.482647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.482666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.488307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.488340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.488358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.493671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.493703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.493720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.498304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.498337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.498355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.501385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.501414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.501430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.506289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.506320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.506351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.511564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.511596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.511614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.517046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.517093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.517110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.522132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.522163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.522180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.526959] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.526989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.527010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.532220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.532266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.532284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.537624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.537669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.537687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.543186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.543217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.543233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.548496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.548528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.548546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.554015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.554048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.554065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.559714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.559746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.559763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.566128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.566159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.566176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.534 [2024-07-12 11:28:13.571880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.534 [2024-07-12 11:28:13.571935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.534 [2024-07-12 11:28:13.571953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.577717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.577754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.577772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.583442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.583473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.583490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.589980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.590011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.590030] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.595290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.595321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.595354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.602164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.602210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.602227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.607209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.607256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.607273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.612479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.612511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.612529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.618780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.618812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.618830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.624724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.624755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.624772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.630003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.630035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.630052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.634603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.634634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.634651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.639348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.639378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.639395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.643979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.644010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.644026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.648621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.648652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.648669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.653556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 
11:28:13.653586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.653603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.658147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.658193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.658210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.535 [2024-07-12 11:28:13.663041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.535 [2024-07-12 11:28:13.663071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.535 [2024-07-12 11:28:13.663088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.667651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.667681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.667704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.673095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.673127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.673145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.678223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.678255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.678272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.683009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.683040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.683057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.687691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.687722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.687739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.692340] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.692370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.692387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.696888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.696918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.696935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.701444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.701475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.701492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.705945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.705976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.705993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.710503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.710539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.710557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.714917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.714947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.714965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.718525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.718557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.718573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.721804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.721849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.721872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.726824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.726855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.726881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.731599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.731628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.731660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.737039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.737070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 11:28:13.737088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.796 [2024-07-12 11:28:13.741722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.796 [2024-07-12 11:28:13.741753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.796 [2024-07-12 
11:28:13.741771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.746302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.746333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.746355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.750954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.750984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.751001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.755434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.755464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.755481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.760372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.760402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.760420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.765762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.765792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.765809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.770208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.770240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.770257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.774845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.774900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.774917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.780421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.780453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.780471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.785313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.785345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.785362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.789808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.789844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.789886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.794349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.794379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.794396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.799161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.799207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.799224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.803666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.803698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.803715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.808413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.808460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.812886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.812915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.812932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.817594] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.817622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.817652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.822329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.822359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.822390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.828110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.828140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.828158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.833119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.833152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.833170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.838053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.838083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.838101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.843580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.843626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.843643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.850270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.850302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.850320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.855873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.855904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.855922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.861484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.861517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.861535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.866893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.866924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.866942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.871471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.871502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.871519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.875998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.876028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 
11:28:13.876050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.880568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.880599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.880631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.797 [2024-07-12 11:28:13.886437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.797 [2024-07-12 11:28:13.886485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.797 [2024-07-12 11:28:13.886503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.798 [2024-07-12 11:28:13.893425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.798 [2024-07-12 11:28:13.893456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.798 [2024-07-12 11:28:13.893489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.798 [2024-07-12 11:28:13.900331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.798 [2024-07-12 11:28:13.900362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.798 [2024-07-12 11:28:13.900379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:47.798 [2024-07-12 11:28:13.906688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.798 [2024-07-12 11:28:13.906734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.798 [2024-07-12 11:28:13.906751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:47.798 [2024-07-12 11:28:13.912842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.798 [2024-07-12 11:28:13.912882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.798 [2024-07-12 11:28:13.912902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.798 [2024-07-12 11:28:13.918919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.798 [2024-07-12 11:28:13.918951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.798 [2024-07-12 11:28:13.918970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:47.798 [2024-07-12 11:28:13.924247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:47.798 [2024-07-12 11:28:13.924278] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.798 [2024-07-12 11:28:13.924295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.929717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.929755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.929773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.936383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.936414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.936432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.943970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.944001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.944018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.950617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 
11:28:13.950649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.950666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.957411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.957443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.957461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.962133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.962166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.962183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.965066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.965097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.965114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.969861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.969899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.969916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.974943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.974974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.974992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.980241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.980287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.980305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.984951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.984982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.984999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.989642] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.989672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.989689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.994179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.994209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.994226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:13.998906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:13.998935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:13.998952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.003648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.003677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.003694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.008675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.008707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.008724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.014244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.014275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.014292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.019636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.019668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.019691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.026244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.026275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.026293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.033667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.033699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.033732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.039420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.039452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.039469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.045458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.045490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.045507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.050537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.050568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.050585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.055373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.055404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.055440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.060262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.060293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.060310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.064971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.065018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.070795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.070828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.070845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.058 [2024-07-12 11:28:14.076710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.058 [2024-07-12 11:28:14.076742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.058 [2024-07-12 11:28:14.076759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.081889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.081928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.081945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.088146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.088179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.088197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.094188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.094220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.094238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.099397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.099431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.099450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.106018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.106051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.106069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.111651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.111684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.111702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.117217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 
11:28:14.117249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.117273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.122915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.122947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.122964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.128317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.128349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.128366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.134114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.134147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.134164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.139780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.139813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.139831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.145348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.145379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.145397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.150718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.150750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.150767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.156451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.156484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.156501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.162134] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.162167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.162184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.167541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.167579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.167597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.173078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.173109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.173127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.178746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.178778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.178796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:23:48.059 [2024-07-12 11:28:14.184377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.059 [2024-07-12 11:28:14.184410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.059 [2024-07-12 11:28:14.184427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.189948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.189979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.189997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.195402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.195435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.195453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.201279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.201310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.201327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.207242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.207274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.207292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.213027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.213058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.213076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.218573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.218604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.218622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.224038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.224080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 
11:28:14.224098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.229745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.229778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.229795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.235678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.235709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.235727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.241453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.317 [2024-07-12 11:28:14.241484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.317 [2024-07-12 11:28:14.241502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:48.317 [2024-07-12 11:28:14.247007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.318 [2024-07-12 11:28:14.247039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.318 [2024-07-12 11:28:14.247056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:48.318 [2024-07-12 11:28:14.253096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24034f0) 00:23:48.318 [2024-07-12 11:28:14.253127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.318 [2024-07-12 11:28:14.253145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.318 00:23:48.318 Latency(us) 00:23:48.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.318 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:48.318 nvme0n1 : 2.00 5781.02 722.63 0.00 0.00 2762.98 700.87 8204.14 00:23:48.318 =================================================================================================================== 00:23:48.318 Total : 5781.02 722.63 0.00 0.00 2762.98 700.87 8204.14 00:23:48.318 0 00:23:48.318 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:48.318 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:48.318 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:48.318 | .driver_specific 00:23:48.318 | .nvme_error 00:23:48.318 | .status_code 00:23:48.318 | .command_transient_transport_error' 00:23:48.318 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # (( 373 > 0 )) 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 673448 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 673448 ']' 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 673448 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 673448 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 673448' 00:23:48.576 killing process with pid 673448 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 673448 00:23:48.576 Received shutdown signal, test time was about 2.000000 seconds 00:23:48.576 00:23:48.576 Latency(us) 00:23:48.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.576 =================================================================================================================== 00:23:48.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.576 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 673448 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 
00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=673837 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 673837 /var/tmp/bperf.sock 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 673837 ']' 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:48.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.834 11:28:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:48.834 [2024-07-12 11:28:14.859846] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:23:48.834 [2024-07-12 11:28:14.859970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673837 ] 00:23:48.834 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.834 [2024-07-12 11:28:14.916895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.091 [2024-07-12 11:28:15.019988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.091 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.091 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:23:49.091 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:49.091 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:49.348 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:49.348 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.348 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.348 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.348 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.348 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.914 nvme0n1 00:23:49.914 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:49.914 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.914 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.914 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.914 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:49.914 11:28:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:49.914 Running I/O for 2 seconds... 
00:23:49.914 [2024-07-12 11:28:16.011757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ed920 00:23:49.914 [2024-07-12 11:28:16.012766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.914 [2024-07-12 11:28:16.012817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:49.914 [2024-07-12 11:28:16.023924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fb048 00:23:49.914 [2024-07-12 11:28:16.024772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.914 [2024-07-12 11:28:16.024802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:49.914 [2024-07-12 11:28:16.035909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e7c50 00:23:49.914 [2024-07-12 11:28:16.036993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:49.914 [2024-07-12 11:28:16.037037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:50.172 [2024-07-12 11:28:16.047987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190eea00 00:23:50.172 [2024-07-12 11:28:16.049336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.172 [2024-07-12 11:28:16.049366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:50.172 [2024-07-12 11:28:16.060187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e84c0
00:23:50.172 [2024-07-12 11:28:16.061407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.172 [2024-07-12 11:28:16.061451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:50.172 [2024-07-12 11:28:16.072239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190dfdc0
00:23:50.172 [2024-07-12 11:28:16.073284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.172 [2024-07-12 11:28:16.073314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.172 [2024-07-12 11:28:16.082961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190feb58
00:23:50.172 [2024-07-12 11:28:16.084126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.172 [2024-07-12 11:28:16.084155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:50.172 [2024-07-12 11:28:16.094350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ee190
00:23:50.172 [2024-07-12 11:28:16.095364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.172 [2024-07-12 11:28:16.095407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:50.172 [2024-07-12 11:28:16.106451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ee5c8
00:23:50.172 [2024-07-12 11:28:16.107605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.172 [2024-07-12 11:28:16.107649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:50.172 [2024-07-12 11:28:16.117543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f2948
00:23:50.172 [2024-07-12 11:28:16.118458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.118501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.129683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e4140
00:23:50.173 [2024-07-12 11:28:16.130590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.130618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.140943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ee190
00:23:50.173 [2024-07-12 11:28:16.142111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.142145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.152660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e23b8
00:23:50.173 [2024-07-12 11:28:16.153682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.153725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.166039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e4578
00:23:50.173 [2024-07-12 11:28:16.167584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.167626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.178120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e3060
00:23:50.173 [2024-07-12 11:28:16.179740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.179783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.190444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fa7d8
00:23:50.173 [2024-07-12 11:28:16.192289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.192332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.199580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0350
00:23:50.173 [2024-07-12 11:28:16.200751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.200794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.211695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e9e10
00:23:50.173 [2024-07-12 11:28:16.213014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.213057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.223188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e3d08
00:23:50.173 [2024-07-12 11:28:16.224268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.224297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.236690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f8618
00:23:50.173 [2024-07-12 11:28:16.238613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.238655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.245975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f96f8
00:23:50.173 [2024-07-12 11:28:16.247139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.247180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.259897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fd208
00:23:50.173 [2024-07-12 11:28:16.261713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.261755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.268124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fb048
00:23:50.173 [2024-07-12 11:28:16.269081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.269123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.279970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f3e60
00:23:50.173 [2024-07-12 11:28:16.280959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.281002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:23:50.173 [2024-07-12 11:28:16.293601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f8618
00:23:50.173 [2024-07-12 11:28:16.295138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.173 [2024-07-12 11:28:16.295166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.305799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e0a68
00:23:50.431 [2024-07-12 11:28:16.307560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.307603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.313835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f8e88
00:23:50.431 [2024-07-12 11:28:16.314617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.314658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.326009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f9f68
00:23:50.431 [2024-07-12 11:28:16.326942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.326971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.340033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190eaab8
00:23:50.431 [2024-07-12 11:28:16.341399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.341442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.348690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e49b0
00:23:50.431 [2024-07-12 11:28:16.349418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.349463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.361762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f6890
00:23:50.431 [2024-07-12 11:28:16.362694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.362724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.372583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190de038
00:23:50.431 [2024-07-12 11:28:16.373471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.373515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.385034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0350
00:23:50.431 [2024-07-12 11:28:16.386115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.386145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.396753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ec840
00:23:50.431 [2024-07-12 11:28:16.397460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.397489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.408946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190eee38
00:23:50.431 [2024-07-12 11:28:16.409756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.409785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.422795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f81e0
00:23:50.431 [2024-07-12 11:28:16.424627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.424669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.431062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ef270
00:23:50.431 [2024-07-12 11:28:16.431802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.431844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.441880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ea248
00:23:50.431 [2024-07-12 11:28:16.442628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.442674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.454702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e7818
00:23:50.431 [2024-07-12 11:28:16.455403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.455433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.467994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e49b0
00:23:50.431 [2024-07-12 11:28:16.469327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.469370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.476593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0788
00:23:50.431 [2024-07-12 11:28:16.477353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.477395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.488385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e99d8
00:23:50.431 [2024-07-12 11:28:16.489106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.489150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.502121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e8d30
00:23:50.431 [2024-07-12 11:28:16.503356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.503400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.512942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e1f80
00:23:50.431 [2024-07-12 11:28:16.514619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.514648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.524644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fc998
00:23:50.431 [2024-07-12 11:28:16.525977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.431 [2024-07-12 11:28:16.526006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:50.431 [2024-07-12 11:28:16.536283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190df550
00:23:50.431 [2024-07-12 11:28:16.537400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.432 [2024-07-12 11:28:16.537442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:50.432 [2024-07-12 11:28:16.548074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e9168
00:23:50.432 [2024-07-12 11:28:16.549130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.432 [2024-07-12 11:28:16.549173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:50.432 [2024-07-12 11:28:16.560021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190dfdc0
00:23:50.432 [2024-07-12 11:28:16.561555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.432 [2024-07-12 11:28:16.561584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.572672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f35f0
00:23:50.690 [2024-07-12 11:28:16.574107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.574150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.582998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fd208
00:23:50.690 [2024-07-12 11:28:16.584708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.584737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.594802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0bc0
00:23:50.690 [2024-07-12 11:28:16.596155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.596184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.606443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f2510
00:23:50.690 [2024-07-12 11:28:16.607609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.607652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.618561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e0630
00:23:50.690 [2024-07-12 11:28:16.619901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.619944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.629228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f1ca0
00:23:50.690 [2024-07-12 11:28:16.630329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.630359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.640599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f5be8
00:23:50.690 [2024-07-12 11:28:16.641572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.641615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.652524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e9e10
00:23:50.690 [2024-07-12 11:28:16.653536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.653564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.664591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90
00:23:50.690 [2024-07-12 11:28:16.665741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.665784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.675676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f96f8
00:23:50.690 [2024-07-12 11:28:16.676778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.676822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.687372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190de038
00:23:50.690 [2024-07-12 11:28:16.688473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.688516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.699486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f35f0
00:23:50.690 [2024-07-12 11:28:16.700620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.700662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.711573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f3e60
00:23:50.690 [2024-07-12 11:28:16.712853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.712902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.722378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e3d08
00:23:50.690 [2024-07-12 11:28:16.723456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.723484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.734380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e8d30
00:23:50.690 [2024-07-12 11:28:16.735557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.735600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.745785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e3060
00:23:50.690 [2024-07-12 11:28:16.747102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.747149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.757421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0ff8
00:23:50.690 [2024-07-12 11:28:16.758515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.758544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.769060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fd208
00:23:50.690 [2024-07-12 11:28:16.770481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.770523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.779550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e99d8
00:23:50.690 [2024-07-12 11:28:16.781106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.781134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.791664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ff3c8
00:23:50.690 [2024-07-12 11:28:16.792609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.792638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.802546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f8a50
00:23:50.690 [2024-07-12 11:28:16.803236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.690 [2024-07-12 11:28:16.803266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:50.690 [2024-07-12 11:28:16.813642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ebb98
00:23:50.690 [2024-07-12 11:28:16.814758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.691 [2024-07-12 11:28:16.814787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.825248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e7c50
00:23:50.949 [2024-07-12 11:28:16.826083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.826112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.837354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e95a0
00:23:50.949 [2024-07-12 11:28:16.838024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.838054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.851454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fa3a0
00:23:50.949 [2024-07-12 11:28:16.853084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.853127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.863556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fd208
00:23:50.949 [2024-07-12 11:28:16.865325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.865368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.871681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f9f68
00:23:50.949 [2024-07-12 11:28:16.872486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.872528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.883749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e23b8
00:23:50.949 [2024-07-12 11:28:16.884494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.884537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.895702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f7538
00:23:50.949 [2024-07-12 11:28:16.896651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.896694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.907022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e6b70
00:23:50.949 [2024-07-12 11:28:16.908028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.908073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.919055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f7538
00:23:50.949 [2024-07-12 11:28:16.920306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.920350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.930822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e8088
00:23:50.949 [2024-07-12 11:28:16.932042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.932071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.942119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f2510
00:23:50.949 [2024-07-12 11:28:16.942941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.942970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.954219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ee5c8
00:23:50.949 [2024-07-12 11:28:16.955272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.949 [2024-07-12 11:28:16.955302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:50.949 [2024-07-12 11:28:16.965187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f92c0
00:23:50.950 [2024-07-12 11:28:16.966891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.950 [2024-07-12 11:28:16.966920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:50.950 [2024-07-12 11:28:16.977339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fb480
00:23:50.950 [2024-07-12 11:28:16.978709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.950 [2024-07-12 11:28:16.978750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:50.950 [2024-07-12 11:28:16.988139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f1868
00:23:50.950 [2024-07-12 11:28:16.989035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.950 [2024-07-12 11:28:16.989062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:50.950 [2024-07-12 11:28:16.999856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e27f0
00:23:50.950 [2024-07-12 11:28:17.000670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:50.950 [2024-07-12 11:28:17.000698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:50.950 [2024-07-12 11:28:17.011423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e01f8 00:23:50.950 [2024-07-12 11:28:17.012654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.950 [2024-07-12 11:28:17.012696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:50.950 [2024-07-12 11:28:17.023076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f57b0 00:23:50.950 [2024-07-12 11:28:17.024296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.950 [2024-07-12 11:28:17.024326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:50.950 [2024-07-12 11:28:17.034556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190edd58 00:23:50.950 [2024-07-12 11:28:17.035629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.950 [2024-07-12 11:28:17.035672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:50.950 [2024-07-12 11:28:17.046368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ea680 00:23:50.950 [2024-07-12 11:28:17.047420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.950 [2024-07-12 11:28:17.047463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:50.950 [2024-07-12 11:28:17.058368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e4de8 00:23:50.950 [2024-07-12 11:28:17.059617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.950 [2024-07-12 11:28:17.059658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:50.950 [2024-07-12 11:28:17.069413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f2510 00:23:50.950 [2024-07-12 11:28:17.070652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:50.950 [2024-07-12 11:28:17.070694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.082065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e95a0 00:23:51.208 [2024-07-12 11:28:17.083540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.083583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.093271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5658 00:23:51.208 [2024-07-12 11:28:17.094188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.094217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.107485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f8618 00:23:51.208 [2024-07-12 11:28:17.109256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.109299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.115736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fdeb0 00:23:51.208 [2024-07-12 11:28:17.116576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.116619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.126850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f2510 00:23:51.208 [2024-07-12 11:28:17.127684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.127727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.138932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f46d0 00:23:51.208 [2024-07-12 11:28:17.139777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.139819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.150988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190eff18 00:23:51.208 [2024-07-12 11:28:17.151985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.152033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.162835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fda78 00:23:51.208 [2024-07-12 11:28:17.163559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.163588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.174836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e1b48 00:23:51.208 [2024-07-12 11:28:17.175720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.175750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.185997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f7970 00:23:51.208 [2024-07-12 11:28:17.186694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 
[2024-07-12 11:28:17.186723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.197950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f81e0 00:23:51.208 [2024-07-12 11:28:17.198805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.198832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.209968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5220 00:23:51.208 [2024-07-12 11:28:17.211000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.211028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.220912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f3e60 00:23:51.208 [2024-07-12 11:28:17.222660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.222688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.230941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e6738 00:23:51.208 [2024-07-12 11:28:17.231799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2953 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.231841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.243527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e84c0 00:23:51.208 [2024-07-12 11:28:17.244217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.244246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.256926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190dfdc0 00:23:51.208 [2024-07-12 11:28:17.258435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.258479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.267831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190efae0 00:23:51.208 [2024-07-12 11:28:17.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.268999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.279501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e3d08 00:23:51.208 [2024-07-12 11:28:17.280554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:17973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.280583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.292882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e7c50 00:23:51.208 [2024-07-12 11:28:17.294719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.294762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.301998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fbcf0 00:23:51.208 [2024-07-12 11:28:17.303134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.303176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.314074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190dece0 00:23:51.208 [2024-07-12 11:28:17.315450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.315494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.324738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f31b8 00:23:51.208 [2024-07-12 11:28:17.325893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.208 [2024-07-12 11:28:17.325922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:51.208 [2024-07-12 11:28:17.336470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e12d8 00:23:51.208 [2024-07-12 11:28:17.337533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.209 [2024-07-12 11:28:17.337575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.349025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fe720 00:23:51.467 [2024-07-12 11:28:17.350158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.350197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.360012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e95a0 00:23:51.467 [2024-07-12 11:28:17.361108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.361160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.372071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e12d8 00:23:51.467 
[2024-07-12 11:28:17.373335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.373379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.384321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190efae0 00:23:51.467 [2024-07-12 11:28:17.385771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.385816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.396527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.467 [2024-07-12 11:28:17.398127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.398155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.408428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190df118 00:23:51.467 [2024-07-12 11:28:17.409973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.410001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.419631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) 
with pdu=0x2000190f2948 00:23:51.467 [2024-07-12 11:28:17.421276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.421305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.432016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e7818 00:23:51.467 [2024-07-12 11:28:17.433873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.433902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.440398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f46d0 00:23:51.467 [2024-07-12 11:28:17.441269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.441312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.452559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190de038 00:23:51.467 [2024-07-12 11:28:17.453588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.453638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.464950] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190efae0 00:23:51.467 [2024-07-12 11:28:17.465678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.465707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.475839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e1710 00:23:51.467 [2024-07-12 11:28:17.476936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.467 [2024-07-12 11:28:17.476966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:51.467 [2024-07-12 11:28:17.490214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e8088 00:23:51.468 [2024-07-12 11:28:17.491811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.491855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.502402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ed920 00:23:51.468 [2024-07-12 11:28:17.504199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.504242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 
11:28:17.510668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e1710 00:23:51.468 [2024-07-12 11:28:17.511492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.511536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.523094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0350 00:23:51.468 [2024-07-12 11:28:17.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.524024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.535267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fa3a0 00:23:51.468 [2024-07-12 11:28:17.536387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.536429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.547470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5ec8 00:23:51.468 [2024-07-12 11:28:17.548715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.548759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.559668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ee5c8 00:23:51.468 [2024-07-12 11:28:17.561286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.561329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.568979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fdeb0 00:23:51.468 [2024-07-12 11:28:17.569861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.569911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.583090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190efae0 00:23:51.468 [2024-07-12 11:28:17.584550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.584583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:51.468 [2024-07-12 11:28:17.591896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5220 00:23:51.468 [2024-07-12 11:28:17.592681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.468 [2024-07-12 11:28:17.592734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:51.728 [2024-07-12 11:28:17.604897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ebb98 00:23:51.728 [2024-07-12 11:28:17.605853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.728 [2024-07-12 11:28:17.605888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:51.728 [2024-07-12 11:28:17.617657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f9f68 00:23:51.728 [2024-07-12 11:28:17.618467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.728 [2024-07-12 11:28:17.618495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:51.728 [2024-07-12 11:28:17.628722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e6fa8 00:23:51.728 [2024-07-12 11:28:17.629731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.728 [2024-07-12 11:28:17.629773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:51.728 [2024-07-12 11:28:17.641734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190de470 00:23:51.728 [2024-07-12 11:28:17.642980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.728 [2024-07-12 11:28:17.643009] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:51.728 [2024-07-12 11:28:17.653862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190df550 00:23:51.728 [2024-07-12 11:28:17.655434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.728 [2024-07-12 11:28:17.655476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:51.728 [2024-07-12 11:28:17.664720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ed920 00:23:51.728 [2024-07-12 11:28:17.666003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.728 [2024-07-12 11:28:17.666031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.676187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fc998 00:23:51.729 [2024-07-12 11:28:17.677245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.677287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.689575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ea680 00:23:51.729 [2024-07-12 11:28:17.691179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.691221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.701805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f0ff8 00:23:51.729 [2024-07-12 11:28:17.703613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.703656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.710078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190ec840 00:23:51.729 [2024-07-12 11:28:17.710923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.710950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.722184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190f2510 00:23:51.729 [2024-07-12 11:28:17.722801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.722828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.734266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190fe2e8 00:23:51.729 [2024-07-12 11:28:17.735020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:51.729 [2024-07-12 11:28:17.735048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.748279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5220 00:23:51.729 [2024-07-12 11:28:17.750082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.750125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.758496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.758687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.758736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.772088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.772280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.772323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.785995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.786186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1808 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.786228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.799961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.800153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.800195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.813898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.814088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.814115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.827704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.827901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.827928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.841464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.841656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:72 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.841683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:51.729 [2024-07-12 11:28:17.855339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:51.729 [2024-07-12 11:28:17.855520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:51.729 [2024-07-12 11:28:17.855547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.024 [2024-07-12 11:28:17.870942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 [2024-07-12 11:28:17.871153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.025 [2024-07-12 11:28:17.871180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.025 [2024-07-12 11:28:17.885057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 [2024-07-12 11:28:17.885259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.025 [2024-07-12 11:28:17.885286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.025 [2024-07-12 11:28:17.898945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 [2024-07-12 11:28:17.899112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.025 [2024-07-12 11:28:17.899139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.025 [2024-07-12 11:28:17.912899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 [2024-07-12 11:28:17.913077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.025 [2024-07-12 11:28:17.913103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.025 [2024-07-12 11:28:17.926718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 [2024-07-12 11:28:17.926887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.025 [2024-07-12 11:28:17.926915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.025 [2024-07-12 11:28:17.940508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 [2024-07-12 11:28:17.940672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:52.025 [2024-07-12 11:28:17.940699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:52.025 [2024-07-12 11:28:17.953700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90 00:23:52.025 
[2024-07-12 11:28:17.953896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:52.025 [2024-07-12 11:28:17.953923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:52.025 [2024-07-12 11:28:17.967563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90
00:23:52.025 [2024-07-12 11:28:17.967754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:52.025 [2024-07-12 11:28:17.967795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:52.025 [2024-07-12 11:28:17.981470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90
00:23:52.025 [2024-07-12 11:28:17.981662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:52.025 [2024-07-12 11:28:17.981689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:52.025 [2024-07-12 11:28:17.995306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdff6b0) with pdu=0x2000190e5a90
00:23:52.025 [2024-07-12 11:28:17.995494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:52.025 [2024-07-12 11:28:17.995535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:52.025
00:23:52.025 Latency(us)
00:23:52.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.025 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:52.025 nvme0n1 : 2.01 21462.75 83.84 0.00 0.00 5949.87 2669.99 15728.64
00:23:52.025 ===================================================================================================================
00:23:52.025 Total : 21462.75 83.84 0.00 0.00 5949.87 2669.99 15728.64
00:23:52.025 0
00:23:52.025 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:52.025 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:52.025 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:52.025 | .driver_specific
00:23:52.025 | .nvme_error
00:23:52.025 | .status_code
00:23:52.025 | .command_transient_transport_error'
00:23:52.025 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:52.283 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 ))
00:23:52.283 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 673837
00:23:52.283 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 673837 ']'
00:23:52.283 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 673837
00:23:52.283 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:23:52.283 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:52.284 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 673837
00:23:52.284 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:52.284 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:52.284 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 673837'
killing process with pid 673837
00:23:52.284 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 673837
Received shutdown signal, test time was about 2.000000 seconds
00:23:52.284
00:23:52.284 Latency(us)
00:23:52.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.284 ===================================================================================================================
00:23:52.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:52.284 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 673837
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=674234
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 674234 /var/tmp/bperf.sock
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 674234 ']'
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:52.541 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:52.541 [2024-07-12 11:28:18.615729] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:23:52.541 [2024-07-12 11:28:18.615813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674234 ]
00:23:52.541 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:52.541 Zero copy mechanism will not be used.
00:23:52.541 EAL: No free 2048 kB hugepages reported on node 1
00:23:52.541 [2024-07-12 11:28:18.672979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.800 [2024-07-12 11:28:18.778838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:52.800 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:52.800 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:23:52.800 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:52.800 11:28:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:53.058 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:53.058 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:53.058 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:53.058 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.058 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:53.058 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:53.623 nvme0n1
00:23:53.623 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:53.623 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:53.623 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:53.623 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:53.623 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:53.623 11:28:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:53.623 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:53.623 Zero copy mechanism will not be used.
00:23:53.623 Running I/O for 2 seconds...
00:23:53.623 [2024-07-12 11:28:19.749522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90
00:23:53.623 [2024-07-12 11:28:19.749914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:53.623 [2024-07-12 11:28:19.749951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:53.623 [2024-07-12 11:28:19.755504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90
00:23:53.623 [2024-07-12 11:28:19.755803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:53.623 [2024-07-12 11:28:19.755832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:53.883 [2024-07-12 11:28:19.761505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with
pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.761877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.761907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.767826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.768171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.768199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.773222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.773548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.773575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.778457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.778751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.778779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.783607] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.783908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.783937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.788802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.789091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.789120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.793945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.794250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.794276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.799308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.799620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.799653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:53.883 
[2024-07-12 11:28:19.804487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.804812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.804840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.810183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.810493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.810521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.816664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.817036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.817065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.822557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.822891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.822919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.828595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.828941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.828969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.834155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.834491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.834519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.840826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.841204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.841247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:53.883 [2024-07-12 11:28:19.847129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:53.883 [2024-07-12 11:28:19.847427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.883 [2024-07-12 11:28:19.847465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:53.883 [2024-07-12 11:28:19.852460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90
00:23:53.883 [2024-07-12 11:28:19.852760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:53.883 [2024-07-12 11:28:19.852788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern repeats for dozens of further WRITE commands (sqid:1 cid:15 nsid:1, len:32, varying lba) between 11:28:19.858 and 11:28:20.319: each tcp.c:2067:data_crc32_calc_done "Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90" is followed by the command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:23:54.403 [2024-07-12 11:28:20.324260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90
00:23:54.403 [2024-07-12 11:28:20.324565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.324592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.329196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.329507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.329535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.334210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.334532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.334559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.339147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.339439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.339466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.344014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 
00:23:54.403 [2024-07-12 11:28:20.344325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.344354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.348987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.349279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.349311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.353822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.354123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.354152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.359238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.359529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.359573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.365496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.365804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.365830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.371854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.372157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.372185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.378415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.378740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.378768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.383444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.383757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.383786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.388540] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.388862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.388896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.393443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.393738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.393765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.399368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.399798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.399839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.404635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.403 [2024-07-12 11:28:20.404971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.403 [2024-07-12 11:28:20.405000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:23:54.403 [2024-07-12 11:28:20.409699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.409991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.410020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.414572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.414863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.414898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.419511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.419832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.419860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.425971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.426287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.426315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.433315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.433653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.433681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.439781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.440083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.440111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.446474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.446762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.446791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.453020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.453304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.453331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.459362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.459646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.459674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.465748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.466068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.466097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.472137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.472416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.472443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.479215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.479512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.404 [2024-07-12 11:28:20.479538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.486158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.486466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.486493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.493636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.493954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.493981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.500896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.501069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.501097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.507621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.507915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.507949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.513579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.513913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.513940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.519789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.520107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.520136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.526383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.526679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.526708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.404 [2024-07-12 11:28:20.532903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.404 [2024-07-12 11:28:20.533218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.404 [2024-07-12 11:28:20.533261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.539428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.539738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.539765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.546397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.546695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.546722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.553638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.553941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.553969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.561218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 
00:23:54.663 [2024-07-12 11:28:20.561519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.561545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.567702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.568000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.568029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.572656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.572946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.572975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.577545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.577839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.577875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.582724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.583240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.583269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.587907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.588199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.588226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.593135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.593425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.593453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.599575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.599872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.599899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 
11:28:20.604583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.604934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.604962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.609793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.610092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.610121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.615491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.615813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.615840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.621855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.622179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.622205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.628246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.628554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.628581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.633338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.633647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.633675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.638279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.638607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.638636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.643269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.643632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.643658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.648293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.648561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.648590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.653086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.663 [2024-07-12 11:28:20.653388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.663 [2024-07-12 11:28:20.653414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.663 [2024-07-12 11:28:20.658016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.658296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.658330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.662949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.663260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.663288] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.667878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.668199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.668226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.672937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.673228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.673255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.677919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.678213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.678240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.682893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.683204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.683246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.687767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.688090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.688118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.692692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.692991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.693018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.697732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.698058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.698085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.702736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.703060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.703089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.707695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.708039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.708068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.712818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.713136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.713163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.717846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.718158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.718185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.722760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.723069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.723096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.727640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.727933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.727962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.732621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.732915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.732944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.737432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.737723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.737750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.742470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 
00:23:54.664 [2024-07-12 11:28:20.742772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.742798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.747343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.747636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.747663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.752380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.752676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.752702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.757167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.757450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.757479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.762110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.762396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.762422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.767157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.767462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.767488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.772156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.772492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.772519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.777036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.777354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.777381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.781985] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.782300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.782327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.786844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.787136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.787170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.664 [2024-07-12 11:28:20.791782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.664 [2024-07-12 11:28:20.792107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.664 [2024-07-12 11:28:20.792150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.922 [2024-07-12 11:28:20.797520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.922 [2024-07-12 11:28:20.797717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.922 [2024-07-12 11:28:20.797745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:23:54.922 [2024-07-12 11:28:20.803440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.803789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.803815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.809429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.809738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.809764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.814324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.814646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.814673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.819372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.819652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.819680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.824258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.824570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.824597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.829149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.829429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.829458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.833943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.834259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.834288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.838844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.839149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.839176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.843829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.844119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.844148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.848907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.849207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.849234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.854079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.854389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.854414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.859647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.859949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:54.923 [2024-07-12 11:28:20.859975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.864682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.865001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.865029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.869606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.869926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.869955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.874489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.874848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.874883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.879471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.879812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.879839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.884417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.884699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.884727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.889224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.889533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.889560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.894102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.894415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.894443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.899431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.899721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.899748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.905933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.906242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.906269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.912643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.912968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.912995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.920256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.920558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.920585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.927777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 
00:23:54.923 [2024-07-12 11:28:20.928071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.928105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.935388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.935685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.935714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.942607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.942820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.942848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.949675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.950025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.950052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.956152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.956453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.956481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.962589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.923 [2024-07-12 11:28:20.962904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.923 [2024-07-12 11:28:20.962931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.923 [2024-07-12 11:28:20.969197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.924 [2024-07-12 11:28:20.969481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.924 [2024-07-12 11:28:20.969510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.924 [2024-07-12 11:28:20.975707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.924 [2024-07-12 11:28:20.976014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.924 [2024-07-12 11:28:20.976042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.924 [2024-07-12 11:28:20.982198] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.924 [2024-07-12 11:28:20.982498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.924 [2024-07-12 11:28:20.982526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.924 [2024-07-12 11:28:20.988626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.924 [2024-07-12 11:28:20.988975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.924 [2024-07-12 11:28:20.989004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.924 [2024-07-12 11:28:20.995779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.924 [2024-07-12 11:28:20.996080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.924 [2024-07-12 11:28:20.996109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.924 [2024-07-12 11:28:21.003167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:54.924 [2024-07-12 11:28:21.003472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.924 [2024-07-12 11:28:21.003500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
[... repeated log entries elided: the same tcp.c:2067:data_crc32_calc_done "Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90" followed by an nvme_qpair.c WRITE command notice and a "COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15" completion, recurring with varying lba values from 2024-07-12 11:28:21.010 through 11:28:21.402 ...]
00:23:55.444 [2024-07-12 11:28:21.407345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.407597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.407625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.411856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.412116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.412144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.416565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.416816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.416844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.421854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.422115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.422142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.426553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.426805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.426832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.431061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.431314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.431341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.435623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.435885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.435912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.440110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.440363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.440390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.444601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.444852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.444 [2024-07-12 11:28:21.444888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.449263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.449515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.449558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.453857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.454116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.454151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.458712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.458972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.459001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.463526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.463792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.463819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.468149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.468414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.468441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.472772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.473031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.473058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.477488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.477741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.477769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.444 [2024-07-12 11:28:21.481993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.444 [2024-07-12 11:28:21.482244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.444 [2024-07-12 11:28:21.482272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.486488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.486739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.486781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.491075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.491328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.491356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.495706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.495971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.496000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.500248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 
00:23:55.445 [2024-07-12 11:28:21.500500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.500528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.504728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.504986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.505014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.509223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.509475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.509503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.513811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.514066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.514094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.518362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.518630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.518657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.522907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.523158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.523185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.527570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.527824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.527852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.532234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.532502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.532544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 
11:28:21.536936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.537213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.537241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.541523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.541785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.541812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.546436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.546687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.546716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.552282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.552536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.552564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.558425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.558719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.558747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.565345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.565620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.565664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.445 [2024-07-12 11:28:21.572123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.445 [2024-07-12 11:28:21.572397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.445 [2024-07-12 11:28:21.572425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.578399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.578679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.578707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.584976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.585268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.585304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.591584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.591879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.591908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.598457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.598714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.598742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.605402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.605705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.605733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.611889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.612171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.612199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.618545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.618881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.618909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.625253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.625562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.625590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.631955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.632251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.632280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.638595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.638877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.638905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.644856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.645096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.645125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.650009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.650233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.650275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.654533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.654756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.654798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.658988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.659198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.659226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.663185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.663383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.663410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.667248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.667442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.667470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.671501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.671696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.671723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.758 [2024-07-12 11:28:21.675941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.758 [2024-07-12 11:28:21.676136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.758 [2024-07-12 11:28:21.676164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.680619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.680816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.680843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.685279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.685474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.685502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.689909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 
00:23:55.759 [2024-07-12 11:28:21.690106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.690134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.694537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.694736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.694764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.699022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.699217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.699244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.703601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.703798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.703826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.708154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.708351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.708379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.712822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.713030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.713058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.717349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.717544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.717572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.722608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.722838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.722885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.726881] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.727077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.727105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.730968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.731162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.731189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.735002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.735196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.735224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.739062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.739257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.739284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:55.759 [2024-07-12 11:28:21.743178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc34af0) with pdu=0x2000190fef90 00:23:55.759 [2024-07-12 11:28:21.743373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.759 [2024-07-12 11:28:21.743401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.759 00:23:55.759 Latency(us) 00:23:55.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.759 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:55.759 nvme0n1 : 2.00 5603.55 700.44 0.00 0.00 2848.54 1832.58 7670.14 00:23:55.759 =================================================================================================================== 00:23:55.759 Total : 5603.55 700.44 0.00 0.00 2848.54 1832.58 7670.14 00:23:55.759 0 00:23:55.759 11:28:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:55.759 11:28:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:55.759 11:28:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:55.759 | .driver_specific 00:23:55.759 | .nvme_error 00:23:55.759 | .status_code 00:23:55.759 | .command_transient_transport_error' 00:23:55.759 11:28:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 )) 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 674234 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 674234 ']' 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 674234 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 674234 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 674234' 00:23:56.016 killing process with pid 674234 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 674234 00:23:56.016 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.016 00:23:56.016 Latency(us) 00:23:56.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.016 =================================================================================================================== 00:23:56.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.016 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 674234 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 672918 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 672918 ']' 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 672918 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # 
uname 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 672918 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 672918' 00:23:56.273 killing process with pid 672918 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 672918 00:23:56.273 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 672918 00:23:56.531 00:23:56.531 real 0m15.484s 00:23:56.531 user 0m30.891s 00:23:56.531 sys 0m4.181s 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:56.531 ************************************ 00:23:56.531 END TEST nvmf_digest_error 00:23:56.531 ************************************ 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:23:56.531 11:28:22 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.531 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.531 rmmod nvme_tcp 00:23:56.531 rmmod nvme_fabrics 00:23:56.789 rmmod nvme_keyring 00:23:56.789 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.789 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:23:56.789 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:23:56.789 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 672918 ']' 00:23:56.789 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 672918 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 672918 ']' 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 672918 00:23:56.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (672918) - No such process 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 672918 is not found' 00:23:56.790 Process with pid 672918 is not found 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.790 11:28:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.690 11:28:24 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.690 00:23:58.690 real 0m35.291s 00:23:58.690 user 1m2.331s 00:23:58.690 sys 0m9.910s 00:23:58.690 11:28:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.690 11:28:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:58.690 ************************************ 00:23:58.690 END TEST nvmf_digest 00:23:58.690 ************************************ 00:23:58.690 11:28:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:58.690 11:28:24 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:23:58.690 11:28:24 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:23:58.690 11:28:24 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:23:58.690 11:28:24 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:58.690 11:28:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.690 11:28:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.690 11:28:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.690 ************************************ 00:23:58.690 START TEST nvmf_bdevperf 00:23:58.690 ************************************ 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:58.690 * Looking for test storage... 
00:23:58.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.690 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.947 11:28:24 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.947 11:28:24 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.947 11:28:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.847 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.847 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.848 11:28:26 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.848 
11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:00.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:00.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:00.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:00.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.848 11:28:26 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.848 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.106 11:28:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.106 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.106 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:24:01.106 00:24:01.106 --- 10.0.0.2 ping statistics --- 00:24:01.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.107 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:01.107 00:24:01.107 --- 10.0.0.1 ping statistics --- 00:24:01.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.107 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=676662 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 676662 00:24:01.107 11:28:27 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 676662 ']' 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.107 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.107 [2024-07-12 11:28:27.097717] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:24:01.107 [2024-07-12 11:28:27.097805] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.107 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.107 [2024-07-12 11:28:27.162678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:01.365 [2024-07-12 11:28:27.272550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.365 [2024-07-12 11:28:27.272595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.365 [2024-07-12 11:28:27.272624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.365 [2024-07-12 11:28:27.272636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:01.365 [2024-07-12 11:28:27.272645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.365 [2024-07-12 11:28:27.272731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.365 [2024-07-12 11:28:27.273002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.365 [2024-07-12 11:28:27.273007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.365 [2024-07-12 11:28:27.401278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.365 Malloc0 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.365 [2024-07-12 11:28:27.463381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.365 { 00:24:01.365 "params": { 00:24:01.365 "name": "Nvme$subsystem", 00:24:01.365 "trtype": "$TEST_TRANSPORT", 00:24:01.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.365 "adrfam": "ipv4", 00:24:01.365 "trsvcid": "$NVMF_PORT", 00:24:01.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.365 "hdgst": ${hdgst:-false}, 00:24:01.365 "ddgst": ${ddgst:-false} 00:24:01.365 }, 00:24:01.365 "method": "bdev_nvme_attach_controller" 00:24:01.365 } 00:24:01.365 EOF 00:24:01.365 )") 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:01.365 11:28:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:01.365 "params": { 00:24:01.365 "name": "Nvme1", 00:24:01.365 "trtype": "tcp", 00:24:01.365 "traddr": "10.0.0.2", 00:24:01.365 "adrfam": "ipv4", 00:24:01.365 "trsvcid": "4420", 00:24:01.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.365 "hdgst": false, 00:24:01.365 "ddgst": false 00:24:01.365 }, 00:24:01.365 "method": "bdev_nvme_attach_controller" 00:24:01.365 }' 00:24:01.623 [2024-07-12 11:28:27.508407] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
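The gen_nvmf_target_json helper traced above expands a heredoc template into the bdev_nvme_attach_controller object that bdevperf reads via --json /dev/fd/62. A minimal standalone sketch of that expansion, using the variable values visible in this run (hdgst/ddgst hardcoded to their defaults; how the object is further wrapped for bdevperf is not visible in this log):

```shell
# Sketch of the config nvmf/common.sh's gen_nvmf_target_json renders for
# subsystem 1; the values below are the ones printed in the log above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the run above this expands to the Nvme1 / 10.0.0.2:4420 object that the log's printf shows being handed to the first bdevperf invocation.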
00:24:01.623 [2024-07-12 11:28:27.508479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676691 ] 00:24:01.623 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.623 [2024-07-12 11:28:27.567980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.623 [2024-07-12 11:28:27.676954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.881 Running I/O for 1 seconds... 00:24:02.814 00:24:02.814 Latency(us) 00:24:02.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.814 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:02.814 Verification LBA range: start 0x0 length 0x4000 00:24:02.814 Nvme1n1 : 1.01 8675.73 33.89 0.00 0.00 14692.75 1784.04 14175.19 00:24:02.814 =================================================================================================================== 00:24:02.814 Total : 8675.73 33.89 0.00 0.00 14692.75 1784.04 14175.19 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=676940 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:24:03.071 { 00:24:03.071 "params": { 00:24:03.071 "name": "Nvme$subsystem", 00:24:03.071 "trtype": "$TEST_TRANSPORT", 00:24:03.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.071 "adrfam": "ipv4", 00:24:03.071 "trsvcid": "$NVMF_PORT", 00:24:03.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.071 "hdgst": ${hdgst:-false}, 00:24:03.071 "ddgst": ${ddgst:-false} 00:24:03.071 }, 00:24:03.071 "method": "bdev_nvme_attach_controller" 00:24:03.071 } 00:24:03.071 EOF 00:24:03.071 )") 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:03.071 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:03.072 11:28:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:03.072 "params": { 00:24:03.072 "name": "Nvme1", 00:24:03.072 "trtype": "tcp", 00:24:03.072 "traddr": "10.0.0.2", 00:24:03.072 "adrfam": "ipv4", 00:24:03.072 "trsvcid": "4420", 00:24:03.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.072 "hdgst": false, 00:24:03.072 "ddgst": false 00:24:03.072 }, 00:24:03.072 "method": "bdev_nvme_attach_controller" 00:24:03.072 }' 00:24:03.072 [2024-07-12 11:28:29.202963] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
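Once the test kills the primary target with kill -9 (below), the NVMe driver emits a print_command / print_completion pair for every in-flight I/O, each completed with ABORTED - SQ DELETION. A hypothetical one-liner for tallying those aborts from a captured log; the sample lines and the `sample_log`/`aborted` names are illustrative, abbreviated copies of the output that follows, not part of the test scripts:

```shell
# Count how many in-flight commands were aborted by the SQ deletion,
# by counting "ABORTED - SQ DELETION" completion notices in the log text.
sample_log=$(cat <<'EOF'
[2024-07-12 11:28:32.173194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 11:28:32.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-12 11:28:32.173328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
EOF
)
aborted=$(printf '%s\n' "$sample_log" | grep -c 'ABORTED - SQ DELETION')
echo "aborted completions: $aborted"   # prints: aborted completions: 2
```

Status (00/08) is the NVMe generic status "Command Aborted due to SQ Deletion", which is the expected completion for queued I/O when the controller's submission queue goes away under it.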
00:24:03.072 [2024-07-12 11:28:29.203047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676940 ] 00:24:03.330 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.330 [2024-07-12 11:28:29.264184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.330 [2024-07-12 11:28:29.371548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.591 Running I/O for 15 seconds... 00:24:06.128 11:28:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 676662 00:24:06.128 11:28:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:06.128 [2024-07-12 11:28:32.173194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.128 [2024-07-12 11:28:32.173517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.128 [2024-07-12 11:28:32.173546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.129 [2024-07-12 11:28:32.173771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.129 [2024-07-12 11:28:32.173784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE lba:39592-39840 and READ lba:38952-39136, each completed ABORTED - SQ DELETION (00/08) qid:1 ...] 00:24:06.130 [2024-07-12 11:28:32.175623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175636] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.175983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.175999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 
[2024-07-12 11:28:32.176028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.130 [2024-07-12 11:28:32.176333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.130 [2024-07-12 11:28:32.176347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 
[2024-07-12 11:28:32.176605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.131 [2024-07-12 11:28:32.176705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.176975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.176989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 
[2024-07-12 11:28:32.177125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.131 [2024-07-12 11:28:32.177428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8254c0 is same with the state(5) to be set 00:24:06.131 [2024-07-12 11:28:32.177458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.131 [2024-07-12 11:28:32.177469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.131 [2024-07-12 11:28:32.177481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39584 len:8 PRP1 0x0 PRP2 0x0 00:24:06.131 [2024-07-12 11:28:32.177498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177565] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8254c0 was disconnected and freed. reset controller. 00:24:06.131 [2024-07-12 11:28:32.177636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.131 [2024-07-12 11:28:32.177658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.131 [2024-07-12 11:28:32.177686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.131 [2024-07-12 11:28:32.177713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.131 [2024-07-12 11:28:32.177739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.131 [2024-07-12 11:28:32.177751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 
00:24:06.131 [2024-07-12 11:28:32.181298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.131 [2024-07-12 11:28:32.181352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.131 [2024-07-12 11:28:32.181909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.132 [2024-07-12 11:28:32.181938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.132 [2024-07-12 11:28:32.181960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.132 [2024-07-12 11:28:32.182182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.132 [2024-07-12 11:28:32.182398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.132 [2024-07-12 11:28:32.182419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.132 [2024-07-12 11:28:32.182434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.132 [2024-07-12 11:28:32.185677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.132 [2024-07-12 11:28:32.194774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.132 [2024-07-12 11:28:32.195176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.132 [2024-07-12 11:28:32.195218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.132 [2024-07-12 11:28:32.195233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.132 [2024-07-12 11:28:32.195474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.132 [2024-07-12 11:28:32.195666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.132 [2024-07-12 11:28:32.195684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.132 [2024-07-12 11:28:32.195696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.132 [2024-07-12 11:28:32.198580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.132 [2024-07-12 11:28:32.207764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.132 [2024-07-12 11:28:32.208198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.132 [2024-07-12 11:28:32.208226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.132 [2024-07-12 11:28:32.208242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.132 [2024-07-12 11:28:32.208467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.132 [2024-07-12 11:28:32.208674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.132 [2024-07-12 11:28:32.208692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.132 [2024-07-12 11:28:32.208704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.132 [2024-07-12 11:28:32.211648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.132 [2024-07-12 11:28:32.220881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.132 [2024-07-12 11:28:32.221375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.132 [2024-07-12 11:28:32.221416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.132 [2024-07-12 11:28:32.221433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.132 [2024-07-12 11:28:32.221702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.132 [2024-07-12 11:28:32.221937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.132 [2024-07-12 11:28:32.221962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.132 [2024-07-12 11:28:32.221975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.132 [2024-07-12 11:28:32.224860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.132 [2024-07-12 11:28:32.234194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.132 [2024-07-12 11:28:32.234585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.132 [2024-07-12 11:28:32.234612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.132 [2024-07-12 11:28:32.234627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.132 [2024-07-12 11:28:32.234841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.132 [2024-07-12 11:28:32.235078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.132 [2024-07-12 11:28:32.235098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.132 [2024-07-12 11:28:32.235110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.132 [2024-07-12 11:28:32.238113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.132 [2024-07-12 11:28:32.247543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.132 [2024-07-12 11:28:32.247937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.132 [2024-07-12 11:28:32.247980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.132 [2024-07-12 11:28:32.247997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.132 [2024-07-12 11:28:32.248224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.132 [2024-07-12 11:28:32.248431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.132 [2024-07-12 11:28:32.248450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.132 [2024-07-12 11:28:32.248461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.132 [2024-07-12 11:28:32.251353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.261105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.261461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.261505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.261520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.261774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.391 [2024-07-12 11:28:32.262041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.391 [2024-07-12 11:28:32.262062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.391 [2024-07-12 11:28:32.262076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.391 [2024-07-12 11:28:32.265057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.274309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.274659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.274701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.274717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.274979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.391 [2024-07-12 11:28:32.275230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.391 [2024-07-12 11:28:32.275249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.391 [2024-07-12 11:28:32.275261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.391 [2024-07-12 11:28:32.278485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.287432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.287885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.287922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.287939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.288170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.391 [2024-07-12 11:28:32.288380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.391 [2024-07-12 11:28:32.288398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.391 [2024-07-12 11:28:32.288410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.391 [2024-07-12 11:28:32.291307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.300685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.301097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.301149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.301165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.301415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.391 [2024-07-12 11:28:32.301607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.391 [2024-07-12 11:28:32.301626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.391 [2024-07-12 11:28:32.301637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.391 [2024-07-12 11:28:32.304544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.313930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.314255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.314282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.314297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.314525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.391 [2024-07-12 11:28:32.314738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.391 [2024-07-12 11:28:32.314757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.391 [2024-07-12 11:28:32.314770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.391 [2024-07-12 11:28:32.317669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.327227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.327602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.327629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.327645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.327897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.391 [2024-07-12 11:28:32.328102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.391 [2024-07-12 11:28:32.328122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.391 [2024-07-12 11:28:32.328135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.391 [2024-07-12 11:28:32.331052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.391 [2024-07-12 11:28:32.340356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.391 [2024-07-12 11:28:32.340782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.391 [2024-07-12 11:28:32.340831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.391 [2024-07-12 11:28:32.340847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.391 [2024-07-12 11:28:32.341121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.341320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.341338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.341351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.344247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.353612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.353940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.353968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.353984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.354205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.354419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.354438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.354455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.357389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.366756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.367135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.367162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.367179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.367419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.367632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.367651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.367663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.370575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.380228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.380631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.380666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.380698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.380951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.381169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.381205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.381218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.384372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.393450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.393882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.393933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.393949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.394162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.394374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.394392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.394404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.397464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.406630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.406961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.406990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.407006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.407248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.407455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.407473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.407485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.410486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.419801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.420208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.420269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.420284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.420518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.420710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.420729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.420740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.423658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.433111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.433520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.433545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.433559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.433789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.434033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.434054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.434068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.437042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.446214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.446591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.446616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.446632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.446847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.447080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.447100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.447113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.450013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.459247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.459587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.459614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.459630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.459850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.460089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.460110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.460122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.463064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.472287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.472650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.472677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.472693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.392 [2024-07-12 11:28:32.472941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.392 [2024-07-12 11:28:32.473183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.392 [2024-07-12 11:28:32.473203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.392 [2024-07-12 11:28:32.473216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.392 [2024-07-12 11:28:32.476133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.392 [2024-07-12 11:28:32.485401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.392 [2024-07-12 11:28:32.485767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.392 [2024-07-12 11:28:32.485794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.392 [2024-07-12 11:28:32.485810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.393 [2024-07-12 11:28:32.486063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.393 [2024-07-12 11:28:32.486297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.393 [2024-07-12 11:28:32.486316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.393 [2024-07-12 11:28:32.486328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.393 [2024-07-12 11:28:32.489220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.393 [2024-07-12 11:28:32.498470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.393 [2024-07-12 11:28:32.498788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.393 [2024-07-12 11:28:32.498814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.393 [2024-07-12 11:28:32.498829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.393 [2024-07-12 11:28:32.499079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.393 [2024-07-12 11:28:32.499309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.393 [2024-07-12 11:28:32.499327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.393 [2024-07-12 11:28:32.499339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.393 [2024-07-12 11:28:32.502227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.393 [2024-07-12 11:28:32.511473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.393 [2024-07-12 11:28:32.511962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.393 [2024-07-12 11:28:32.512005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.393 [2024-07-12 11:28:32.512022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.393 [2024-07-12 11:28:32.512272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.393 [2024-07-12 11:28:32.512479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.393 [2024-07-12 11:28:32.512497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.393 [2024-07-12 11:28:32.512509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.393 [2024-07-12 11:28:32.515321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.652 [2024-07-12 11:28:32.525012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.652 [2024-07-12 11:28:32.525343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.652 [2024-07-12 11:28:32.525385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.652 [2024-07-12 11:28:32.525401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.652 [2024-07-12 11:28:32.525622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.652 [2024-07-12 11:28:32.525836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.652 [2024-07-12 11:28:32.525856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.652 [2024-07-12 11:28:32.525893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.652 [2024-07-12 11:28:32.529187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.652 [2024-07-12 11:28:32.538310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.652 [2024-07-12 11:28:32.538693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.652 [2024-07-12 11:28:32.538733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.652 [2024-07-12 11:28:32.538754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.652 [2024-07-12 11:28:32.539007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.652 [2024-07-12 11:28:32.539256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.652 [2024-07-12 11:28:32.539275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.652 [2024-07-12 11:28:32.539287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.652 [2024-07-12 11:28:32.542214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.652 [2024-07-12 11:28:32.551372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.652 [2024-07-12 11:28:32.551732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.652 [2024-07-12 11:28:32.551758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.652 [2024-07-12 11:28:32.551774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.652 [2024-07-12 11:28:32.552023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.652 [2024-07-12 11:28:32.552269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.652 [2024-07-12 11:28:32.552287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.652 [2024-07-12 11:28:32.552299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.652 [2024-07-12 11:28:32.555070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.652 [2024-07-12 11:28:32.564446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.652 [2024-07-12 11:28:32.564773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.652 [2024-07-12 11:28:32.564799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.652 [2024-07-12 11:28:32.564813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.652 [2024-07-12 11:28:32.565077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.652 [2024-07-12 11:28:32.565324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.652 [2024-07-12 11:28:32.565343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.652 [2024-07-12 11:28:32.565354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.652 [2024-07-12 11:28:32.568285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.652 [2024-07-12 11:28:32.577531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.652 [2024-07-12 11:28:32.577897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.652 [2024-07-12 11:28:32.577939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.652 [2024-07-12 11:28:32.577956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.652 [2024-07-12 11:28:32.578209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.652 [2024-07-12 11:28:32.578416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.652 [2024-07-12 11:28:32.578438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.652 [2024-07-12 11:28:32.578450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.652 [2024-07-12 11:28:32.581391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.652 [2024-07-12 11:28:32.590681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.652 [2024-07-12 11:28:32.591054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.652 [2024-07-12 11:28:32.591081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.652 [2024-07-12 11:28:32.591097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.652 [2024-07-12 11:28:32.591331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.652 [2024-07-12 11:28:32.591539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.652 [2024-07-12 11:28:32.591558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.652 [2024-07-12 11:28:32.591570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.652 [2024-07-12 11:28:32.594505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.652 [2024-07-12 11:28:32.603758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.652 [2024-07-12 11:28:32.604191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.652 [2024-07-12 11:28:32.604234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.652 [2024-07-12 11:28:32.604250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.652 [2024-07-12 11:28:32.604481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.652 [2024-07-12 11:28:32.604688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.652 [2024-07-12 11:28:32.604707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.652 [2024-07-12 11:28:32.604719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.652 [2024-07-12 11:28:32.607649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.652 [2024-07-12 11:28:32.616777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.652 [2024-07-12 11:28:32.617160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.652 [2024-07-12 11:28:32.617201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.652 [2024-07-12 11:28:32.617216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.652 [2024-07-12 11:28:32.617444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.652 [2024-07-12 11:28:32.617637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.652 [2024-07-12 11:28:32.617655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.652 [2024-07-12 11:28:32.617667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.652 [2024-07-12 11:28:32.620619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.652 [2024-07-12 11:28:32.629995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.652 [2024-07-12 11:28:32.630381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.652 [2024-07-12 11:28:32.630424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.652 [2024-07-12 11:28:32.630440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.630708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.630941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.630961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.630974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.633926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.643338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.643735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.643762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.643778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.644044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.644275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.644293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.644305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.647383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.656714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.657096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.657125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.657141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.657373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.657587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.657607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.657619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.660715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.670079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.670472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.670501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.670522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.670763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.671007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.671027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.671041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.674018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.683439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.683864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.683898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.683915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.684129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.684359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.684378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.684391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.687471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.696811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.697186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.697215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.697232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.697460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.697673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.697692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.697704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.700749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.710187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.710526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.710554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.710570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.710798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.711045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.711067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.711087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.714066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.723487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.723861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.723905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.723923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.724157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.724372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.724391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.724403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.727421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.736825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.737204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.737232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.737249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.737480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.737694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.737713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.737725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.740735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.750087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.750497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.750524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.750540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.750763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.751011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.751032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.751045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.754070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.763518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.763942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.763973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.763989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.653 [2024-07-12 11:28:32.764216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.653 [2024-07-12 11:28:32.764430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.653 [2024-07-12 11:28:32.764449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.653 [2024-07-12 11:28:32.764462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.653 [2024-07-12 11:28:32.767441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.653 [2024-07-12 11:28:32.776843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.653 [2024-07-12 11:28:32.777246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.653 [2024-07-12 11:28:32.777274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.653 [2024-07-12 11:28:32.777290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.654 [2024-07-12 11:28:32.777533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.654 [2024-07-12 11:28:32.777731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.654 [2024-07-12 11:28:32.777749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.654 [2024-07-12 11:28:32.777762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.654 [2024-07-12 11:28:32.781029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.912 [2024-07-12 11:28:32.790415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.790857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.790892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.790910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.791153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.791367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.791386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.791398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.794386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.803666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.804058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.804087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.804103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.804350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.804547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.804566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.804578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.807559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.816840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.817250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.817278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.817295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.817536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.817734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.817753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.817765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.820744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.830075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.830541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.830569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.830585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.830827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.831057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.831079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.831092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.834059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.843342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.843749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.843775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.843790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.844075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.844291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.844310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.844322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.847336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.856563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.856936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.856965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.856981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.857223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.857421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.857439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.857451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.860495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.869811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.870165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.870194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.870225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.870445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.870658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.870677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.870689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.873667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.883083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.883489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.883517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.883533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.883775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.884025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.884047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.884060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.887055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.896261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.896635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.896667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.896684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.896938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.897143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.897176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.897189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.900145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.909557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.913 [2024-07-12 11:28:32.909969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:06.913 [2024-07-12 11:28:32.909996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:06.913 [2024-07-12 11:28:32.910028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:06.913 [2024-07-12 11:28:32.910271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:06.913 [2024-07-12 11:28:32.910485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:06.913 [2024-07-12 11:28:32.910504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:06.913 [2024-07-12 11:28:32.910517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.913 [2024-07-12 11:28:32.913500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:06.913 [2024-07-12 11:28:32.922698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.913 [2024-07-12 11:28:32.923094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.913 [2024-07-12 11:28:32.923122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.913 [2024-07-12 11:28:32.923138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.913 [2024-07-12 11:28:32.923380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.913 [2024-07-12 11:28:32.923593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.913 [2024-07-12 11:28:32.923611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.913 [2024-07-12 11:28:32.923623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.913 [2024-07-12 11:28:32.926664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:32.936171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:32.936631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:32.936660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:32.936676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:32.936919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:32.937136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:32.937172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:32.937185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:32.940214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:32.949440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:32.949884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:32.949912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:32.949928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:32.950142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:32.950355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:32.950374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:32.950386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:32.953364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:32.962844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:32.963187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:32.963231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:32.963247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:32.963485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:32.963698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:32.963717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:32.963729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:32.966709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:32.976172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:32.976560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:32.976602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:32.976617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:32.976852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:32.977070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:32.977091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:32.977103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:32.980072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:32.989496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:32.989841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:32.989874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:32.989892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:32.990120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:32.990336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:32.990355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:32.990368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:32.993345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:33.002743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:33.003145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:33.003173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:33.003189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:33.003431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:33.003645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:33.003664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:33.003677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:33.006665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:33.016046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:33.016439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:33.016466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:33.016482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:33.016716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:33.016946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:33.016967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:33.016980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:33.019949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:33.029399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:33.029724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:33.029750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:33.029770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:33.030017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:33.030236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:33.030256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:33.030268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.914 [2024-07-12 11:28:33.033239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:06.914 [2024-07-12 11:28:33.043141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.914 [2024-07-12 11:28:33.043522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.914 [2024-07-12 11:28:33.043565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:06.914 [2024-07-12 11:28:33.043581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:06.914 [2024-07-12 11:28:33.043841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:06.914 [2024-07-12 11:28:33.044117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.914 [2024-07-12 11:28:33.044140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:06.914 [2024-07-12 11:28:33.044154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.173 [2024-07-12 11:28:33.047338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.173 [2024-07-12 11:28:33.056370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.173 [2024-07-12 11:28:33.056769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.173 [2024-07-12 11:28:33.056811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.173 [2024-07-12 11:28:33.056828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.173 [2024-07-12 11:28:33.057072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.173 [2024-07-12 11:28:33.057305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.173 [2024-07-12 11:28:33.057324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.173 [2024-07-12 11:28:33.057337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.173 [2024-07-12 11:28:33.060269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.173 [2024-07-12 11:28:33.069574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.173 [2024-07-12 11:28:33.069944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.173 [2024-07-12 11:28:33.069973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.173 [2024-07-12 11:28:33.069990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.173 [2024-07-12 11:28:33.070232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.173 [2024-07-12 11:28:33.070429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.173 [2024-07-12 11:28:33.070448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.173 [2024-07-12 11:28:33.070465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.173 [2024-07-12 11:28:33.073442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.173 [2024-07-12 11:28:33.082813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.173 [2024-07-12 11:28:33.083169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.173 [2024-07-12 11:28:33.083197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.173 [2024-07-12 11:28:33.083214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.173 [2024-07-12 11:28:33.083443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.173 [2024-07-12 11:28:33.083677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.173 [2024-07-12 11:28:33.083696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.173 [2024-07-12 11:28:33.083708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.173 [2024-07-12 11:28:33.086704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.173 [2024-07-12 11:28:33.096186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.173 [2024-07-12 11:28:33.096560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.173 [2024-07-12 11:28:33.096602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.173 [2024-07-12 11:28:33.096618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.173 [2024-07-12 11:28:33.096881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.173 [2024-07-12 11:28:33.097102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.173 [2024-07-12 11:28:33.097122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.173 [2024-07-12 11:28:33.097135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.173 [2024-07-12 11:28:33.100107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.173 [2024-07-12 11:28:33.109492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.173 [2024-07-12 11:28:33.109931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.173 [2024-07-12 11:28:33.109959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.173 [2024-07-12 11:28:33.109975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.173 [2024-07-12 11:28:33.110204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.173 [2024-07-12 11:28:33.110435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.173 [2024-07-12 11:28:33.110454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.173 [2024-07-12 11:28:33.110467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.173 [2024-07-12 11:28:33.113469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.173 [2024-07-12 11:28:33.122714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.173 [2024-07-12 11:28:33.123121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.173 [2024-07-12 11:28:33.123149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.123165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.123407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.123619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.123639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.123651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.126675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.135965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.136410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.136438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.136454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.136699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.136921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.136941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.136954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.139902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.149135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.149566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.149594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.149610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.149850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.150061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.150081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.150093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.153033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.162490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.162876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.162903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.162934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.163180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.163389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.163408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.163420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.166408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.175768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.176210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.176238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.176254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.176495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.176702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.176721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.176733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.180133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.189205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.189619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.189647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.189663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.189918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.190158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.190177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.190190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.193238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.202432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.202847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.202896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.202913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.203155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.203362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.203380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.203397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.206426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.215668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.216107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.216136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.216152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.216393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.216599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.216618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.216629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.219584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.229004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.229437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.229465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.229481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.229714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.229952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.229973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.229985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.232927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.242187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.242537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.242565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.242581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.242821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.243050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.243071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.243084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.246119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.255352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.255703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.255735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.174 [2024-07-12 11:28:33.255751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.174 [2024-07-12 11:28:33.256017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.174 [2024-07-12 11:28:33.256249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.174 [2024-07-12 11:28:33.256267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.174 [2024-07-12 11:28:33.256279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.174 [2024-07-12 11:28:33.259232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.174 [2024-07-12 11:28:33.268698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.174 [2024-07-12 11:28:33.269040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.174 [2024-07-12 11:28:33.269067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.175 [2024-07-12 11:28:33.269083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.175 [2024-07-12 11:28:33.269315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.175 [2024-07-12 11:28:33.269513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.175 [2024-07-12 11:28:33.269531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.175 [2024-07-12 11:28:33.269544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.175 [2024-07-12 11:28:33.272589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.175 [2024-07-12 11:28:33.281951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.175 [2024-07-12 11:28:33.282344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.175 [2024-07-12 11:28:33.282371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.175 [2024-07-12 11:28:33.282386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.175 [2024-07-12 11:28:33.282622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.175 [2024-07-12 11:28:33.282830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.175 [2024-07-12 11:28:33.282864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.175 [2024-07-12 11:28:33.282894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.175 [2024-07-12 11:28:33.285915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.175 [2024-07-12 11:28:33.295093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.175 [2024-07-12 11:28:33.295407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.175 [2024-07-12 11:28:33.295433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.175 [2024-07-12 11:28:33.295448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.175 [2024-07-12 11:28:33.295648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.175 [2024-07-12 11:28:33.295904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.175 [2024-07-12 11:28:33.295925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.175 [2024-07-12 11:28:33.295937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.175 [2024-07-12 11:28:33.298859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.308425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.308764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.308793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.308810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.309054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.309317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.309338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.309351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.312534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.321593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.322007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.322036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.322053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.322293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.322501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.322519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.322531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.325483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.334906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.335292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.335320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.335336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.335581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.335795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.335814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.335826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.338873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.348128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.348560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.348588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.348604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.348825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.349044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.349064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.349076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.352004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.361413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.361834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.361885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.361902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.362140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.362347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.362364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.362377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.365378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.374531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.374883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.374911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.374927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.375168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.375376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.375394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.375406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.378358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.387825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.388222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.388251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.388275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.388522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.388715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.388733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.388745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.391749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.401161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.401591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.401627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.401643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.401895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.402093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.402112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.402124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.405076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.414461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.414812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.414839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.414877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.415119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.415328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.415347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.415359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.418316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.427674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.428071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.428099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.428116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.434 [2024-07-12 11:28:33.428343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.434 [2024-07-12 11:28:33.428574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.434 [2024-07-12 11:28:33.428600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.434 [2024-07-12 11:28:33.428615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.434 [2024-07-12 11:28:33.432013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.434 [2024-07-12 11:28:33.441009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.434 [2024-07-12 11:28:33.441458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.434 [2024-07-12 11:28:33.441486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.434 [2024-07-12 11:28:33.441502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.441743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.441983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.442003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.442016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.445028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.454370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.454782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.454810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.454826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.455075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.455283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.455302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.455314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.458270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.467740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.468120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.468156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.468172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.468415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.468606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.468625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.468636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.471616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.481073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.481512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.481540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.481555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.481794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.482029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.482049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.482061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.485054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.494302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.494654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.494681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.494697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.494941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.495139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.495158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.495170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.498126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.507530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.507885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.507927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.507944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.508164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.508393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.508412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.508423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.511414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.520845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.521267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.521296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.521312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.521558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.521765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.521783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.521795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.524758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.534009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.534407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.534435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.534451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.534672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.534905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.534925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.534937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.537993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.547319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.435 [2024-07-12 11:28:33.547801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.435 [2024-07-12 11:28:33.547853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.435 [2024-07-12 11:28:33.547879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.435 [2024-07-12 11:28:33.548144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.435 [2024-07-12 11:28:33.548370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.435 [2024-07-12 11:28:33.548388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.435 [2024-07-12 11:28:33.548401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.435 [2024-07-12 11:28:33.551351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.435 [2024-07-12 11:28:33.560592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.435 [2024-07-12 11:28:33.560962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.435 [2024-07-12 11:28:33.560991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.435 [2024-07-12 11:28:33.561008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.435 [2024-07-12 11:28:33.561244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.435 [2024-07-12 11:28:33.561497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.435 [2024-07-12 11:28:33.561519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.435 [2024-07-12 11:28:33.561536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.435 [2024-07-12 11:28:33.564876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.573886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.574282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.574324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.574341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.574564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.574772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.574790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.574802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.577699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.587018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.587373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.587401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.587417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.587638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.587861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.587894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.587908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.590700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.600134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.600499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.600526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.600541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.600776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.600997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.601017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.601030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.603819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.613176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.613543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.613575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.613591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.613829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.614067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.614087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.614100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.616987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.626302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.626793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.626834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.626851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.627089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.627318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.627336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.627348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.630261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.639324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.639693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.639735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.639750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.640031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.640231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.640250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.640262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.643152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.652499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.652870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.652898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.652914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.653154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.653368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.653386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.653399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.656213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.665479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.665920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.665963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.694 [2024-07-12 11:28:33.665980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.694 [2024-07-12 11:28:33.666219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.694 [2024-07-12 11:28:33.666426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.694 [2024-07-12 11:28:33.666445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.694 [2024-07-12 11:28:33.666456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.694 [2024-07-12 11:28:33.669352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.694 [2024-07-12 11:28:33.678652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.694 [2024-07-12 11:28:33.679051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.694 [2024-07-12 11:28:33.679080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.679096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.679329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.679578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.679599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.679612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.683067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.691891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.692440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.692492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.692508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.692753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.693041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.693061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.693074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.695960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.704886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.705316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.705358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.705374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.705616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.705822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.705841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.705879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.708796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.717995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.718378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.718404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.718419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.718632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.718839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.718885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.718902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.721741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.731094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.731430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.731457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.731473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.731694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.731930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.731950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.731963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.734828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.744140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.744540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.744565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.744584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.744815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.745054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.745074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.745087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.748013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.757327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.757694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.757722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.757738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.757994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.758193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.758212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.758224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.761048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.770311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.770814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.770841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.770886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.771153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.771345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.771363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.771375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.774236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.783511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.783838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.783872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.783910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.784118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.784343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.784366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.784379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.787229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.796498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.796893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.796920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.796936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.797157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.797381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.797399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.797411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.800189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.809618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.809998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.695 [2024-07-12 11:28:33.810040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.695 [2024-07-12 11:28:33.810056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.695 [2024-07-12 11:28:33.810281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.695 [2024-07-12 11:28:33.810489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.695 [2024-07-12 11:28:33.810507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.695 [2024-07-12 11:28:33.810519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.695 [2024-07-12 11:28:33.813455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.695 [2024-07-12 11:28:33.823072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.695 [2024-07-12 11:28:33.823586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.696 [2024-07-12 11:28:33.823629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.696 [2024-07-12 11:28:33.823653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.696 [2024-07-12 11:28:33.823909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.696 [2024-07-12 11:28:33.824150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.696 [2024-07-12 11:28:33.824172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.696 [2024-07-12 11:28:33.824201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.827443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.836263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.836749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.836806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.836822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.837078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.837304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.837323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.837336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.840232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.849403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.849765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.849807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.849823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.850090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.850299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.850318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.850330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.853218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.862641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.862980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.863007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.863023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.863244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.863471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.863489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.863501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.866397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.875670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.876051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.876080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.876096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.876337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.876545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.876564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.876576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.879476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.888763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.889153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.889181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.889212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.889446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.889653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.889671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.889683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.892576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.901888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.902276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.902316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.902332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.902556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.902766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.902784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.902796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.905694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.914958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.915386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.915428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.915445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.915685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.915905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.915925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.915941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.918710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.928020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.953 [2024-07-12 11:28:33.928334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.953 [2024-07-12 11:28:33.928359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:07.953 [2024-07-12 11:28:33.928374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:07.953 [2024-07-12 11:28:33.928568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:07.953 [2024-07-12 11:28:33.928776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.953 [2024-07-12 11:28:33.928794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.953 [2024-07-12 11:28:33.928806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.953 [2024-07-12 11:28:33.932228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.953 [2024-07-12 11:28:33.941271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.953 [2024-07-12 11:28:33.941606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.953 [2024-07-12 11:28:33.941633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.953 [2024-07-12 11:28:33.941648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.953 [2024-07-12 11:28:33.941877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.953 [2024-07-12 11:28:33.942103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.953 [2024-07-12 11:28:33.942123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.953 [2024-07-12 11:28:33.942137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.953 [2024-07-12 11:28:33.945130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.953 [2024-07-12 11:28:33.954494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.953 [2024-07-12 11:28:33.954859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.953 [2024-07-12 11:28:33.954908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.953 [2024-07-12 11:28:33.954925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.953 [2024-07-12 11:28:33.955166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.953 [2024-07-12 11:28:33.955373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.953 [2024-07-12 11:28:33.955391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.953 [2024-07-12 11:28:33.955403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.953 [2024-07-12 11:28:33.958341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.953 [2024-07-12 11:28:33.967635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.953 [2024-07-12 11:28:33.968026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.953 [2024-07-12 11:28:33.968059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.953 [2024-07-12 11:28:33.968076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.953 [2024-07-12 11:28:33.968317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.953 [2024-07-12 11:28:33.968524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.953 [2024-07-12 11:28:33.968542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.953 [2024-07-12 11:28:33.968554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.953 [2024-07-12 11:28:33.971370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.953 [2024-07-12 11:28:33.980740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.953 [2024-07-12 11:28:33.981141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.953 [2024-07-12 11:28:33.981182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.953 [2024-07-12 11:28:33.981198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.953 [2024-07-12 11:28:33.981419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.953 [2024-07-12 11:28:33.981627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.953 [2024-07-12 11:28:33.981645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.953 [2024-07-12 11:28:33.981657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.953 [2024-07-12 11:28:33.984544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.953 [2024-07-12 11:28:33.993885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.953 [2024-07-12 11:28:33.994384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.953 [2024-07-12 11:28:33.994411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.953 [2024-07-12 11:28:33.994442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.953 [2024-07-12 11:28:33.994691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.953 [2024-07-12 11:28:33.994928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.953 [2024-07-12 11:28:33.994949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.953 [2024-07-12 11:28:33.994962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.953 [2024-07-12 11:28:33.997827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.953 [2024-07-12 11:28:34.007006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.953 [2024-07-12 11:28:34.007348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.953 [2024-07-12 11:28:34.007375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.954 [2024-07-12 11:28:34.007390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.954 [2024-07-12 11:28:34.007613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.954 [2024-07-12 11:28:34.007825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.954 [2024-07-12 11:28:34.007843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.954 [2024-07-12 11:28:34.007884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.954 [2024-07-12 11:28:34.010798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.954 [2024-07-12 11:28:34.020180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.954 [2024-07-12 11:28:34.020669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.954 [2024-07-12 11:28:34.020697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.954 [2024-07-12 11:28:34.020728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.954 [2024-07-12 11:28:34.020980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.954 [2024-07-12 11:28:34.021195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.954 [2024-07-12 11:28:34.021228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.954 [2024-07-12 11:28:34.021240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.954 [2024-07-12 11:28:34.024051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.954 [2024-07-12 11:28:34.033213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.954 [2024-07-12 11:28:34.033572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.954 [2024-07-12 11:28:34.033599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.954 [2024-07-12 11:28:34.033614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.954 [2024-07-12 11:28:34.033828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.954 [2024-07-12 11:28:34.034065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.954 [2024-07-12 11:28:34.034086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.954 [2024-07-12 11:28:34.034098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.954 [2024-07-12 11:28:34.037002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.954 [2024-07-12 11:28:34.046370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.954 [2024-07-12 11:28:34.046735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.954 [2024-07-12 11:28:34.046776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.954 [2024-07-12 11:28:34.046792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.954 [2024-07-12 11:28:34.047053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.954 [2024-07-12 11:28:34.047263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.954 [2024-07-12 11:28:34.047282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.954 [2024-07-12 11:28:34.047294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.954 [2024-07-12 11:28:34.050167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.954 [2024-07-12 11:28:34.059484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.954 [2024-07-12 11:28:34.059884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.954 [2024-07-12 11:28:34.059912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.954 [2024-07-12 11:28:34.059928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.954 [2024-07-12 11:28:34.060157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.954 [2024-07-12 11:28:34.060385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.954 [2024-07-12 11:28:34.060403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.954 [2024-07-12 11:28:34.060416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.954 [2024-07-12 11:28:34.063388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:07.954 [2024-07-12 11:28:34.072534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.954 [2024-07-12 11:28:34.072889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:07.954 [2024-07-12 11:28:34.072949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:07.954 [2024-07-12 11:28:34.072965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:07.954 [2024-07-12 11:28:34.073199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:07.954 [2024-07-12 11:28:34.073405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.954 [2024-07-12 11:28:34.073423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:07.954 [2024-07-12 11:28:34.073435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.954 [2024-07-12 11:28:34.076254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.218 [2024-07-12 11:28:34.086252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.218 [2024-07-12 11:28:34.086621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.218 [2024-07-12 11:28:34.086663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.218 [2024-07-12 11:28:34.086679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.218 [2024-07-12 11:28:34.086941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.218 [2024-07-12 11:28:34.087140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.218 [2024-07-12 11:28:34.087159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.218 [2024-07-12 11:28:34.087186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.218 [2024-07-12 11:28:34.090357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.218 [2024-07-12 11:28:34.099379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.218 [2024-07-12 11:28:34.099713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.218 [2024-07-12 11:28:34.099741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.218 [2024-07-12 11:28:34.099762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.218 [2024-07-12 11:28:34.100018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.218 [2024-07-12 11:28:34.100265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.218 [2024-07-12 11:28:34.100284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.218 [2024-07-12 11:28:34.100296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.218 [2024-07-12 11:28:34.103181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.218 [2024-07-12 11:28:34.112517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.218 [2024-07-12 11:28:34.113008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.218 [2024-07-12 11:28:34.113048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.218 [2024-07-12 11:28:34.113064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.218 [2024-07-12 11:28:34.113310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.218 [2024-07-12 11:28:34.113502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.218 [2024-07-12 11:28:34.113520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.218 [2024-07-12 11:28:34.113532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.218 [2024-07-12 11:28:34.116403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.218 [2024-07-12 11:28:34.125736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.218 [2024-07-12 11:28:34.126133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.218 [2024-07-12 11:28:34.126176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.218 [2024-07-12 11:28:34.126192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.218 [2024-07-12 11:28:34.126427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.218 [2024-07-12 11:28:34.126635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.218 [2024-07-12 11:28:34.126653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.218 [2024-07-12 11:28:34.126666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.218 [2024-07-12 11:28:34.129562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.138966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.219 [2024-07-12 11:28:34.139404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.219 [2024-07-12 11:28:34.139455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.219 [2024-07-12 11:28:34.139470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.219 [2024-07-12 11:28:34.139730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.219 [2024-07-12 11:28:34.139956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.219 [2024-07-12 11:28:34.139981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.219 [2024-07-12 11:28:34.139995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.219 [2024-07-12 11:28:34.142877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.152004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.219 [2024-07-12 11:28:34.152366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.219 [2024-07-12 11:28:34.152392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.219 [2024-07-12 11:28:34.152408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.219 [2024-07-12 11:28:34.152643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.219 [2024-07-12 11:28:34.152851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.219 [2024-07-12 11:28:34.152880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.219 [2024-07-12 11:28:34.152898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.219 [2024-07-12 11:28:34.155666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.165178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.219 [2024-07-12 11:28:34.165509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.219 [2024-07-12 11:28:34.165536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.219 [2024-07-12 11:28:34.165552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.219 [2024-07-12 11:28:34.165773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.219 [2024-07-12 11:28:34.165994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.219 [2024-07-12 11:28:34.166015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.219 [2024-07-12 11:28:34.166027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.219 [2024-07-12 11:28:34.168792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.178285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.219 [2024-07-12 11:28:34.178645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.219 [2024-07-12 11:28:34.178670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.219 [2024-07-12 11:28:34.178685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.219 [2024-07-12 11:28:34.178934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.219 [2024-07-12 11:28:34.179186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.219 [2024-07-12 11:28:34.179206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.219 [2024-07-12 11:28:34.179219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.219 [2024-07-12 11:28:34.182589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.191503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.219 [2024-07-12 11:28:34.191923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.219 [2024-07-12 11:28:34.191952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.219 [2024-07-12 11:28:34.191968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.219 [2024-07-12 11:28:34.192221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.219 [2024-07-12 11:28:34.192414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.219 [2024-07-12 11:28:34.192432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.219 [2024-07-12 11:28:34.192444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.219 [2024-07-12 11:28:34.195390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.204602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.219 [2024-07-12 11:28:34.204992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.219 [2024-07-12 11:28:34.205020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.219 [2024-07-12 11:28:34.205036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.219 [2024-07-12 11:28:34.205290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.219 [2024-07-12 11:28:34.205497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.219 [2024-07-12 11:28:34.205516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.219 [2024-07-12 11:28:34.205528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.219 [2024-07-12 11:28:34.208424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.219 [2024-07-12 11:28:34.217909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.218275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.218318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.218334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.218587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.218793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.218811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.218823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.221717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.231074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.231420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.231446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.231461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.231681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.231921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.231942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.231955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.234782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.244176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.244606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.244648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.244665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.244918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.245123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.245143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.245170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.248077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.257245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.257612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.257654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.257670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.257962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.258167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.258202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.258214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.261104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.270474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.270906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.270949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.270966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.271204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.271395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.271413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.271431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.274364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.283811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.284286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.284320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.284352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.284606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.284804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.284822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.284835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.287814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.296833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.297183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.297211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.297226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.297447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.297655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.297674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.297686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.300580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.310066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.310446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.310487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.310503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.310748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.310974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.310995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.311007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.313877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.323181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.323673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.323719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.323736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.324000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.324214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.324232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.324244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.327115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.219 [2024-07-12 11:28:34.336394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.219 [2024-07-12 11:28:34.336744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.219 [2024-07-12 11:28:34.336785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.219 [2024-07-12 11:28:34.336800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.219 [2024-07-12 11:28:34.337076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.219 [2024-07-12 11:28:34.337307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.219 [2024-07-12 11:28:34.337326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.219 [2024-07-12 11:28:34.337338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.219 [2024-07-12 11:28:34.340373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.349999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.350362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.350392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.350409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.350641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.350855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.350907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.350933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.354273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.363655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.364044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.364074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.364090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.364318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.364536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.364555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.364568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.367639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.377016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.377472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.377525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.377540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.377797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.378027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.378048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.378063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.381024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.390316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.390671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.390733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.390748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.391014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.391226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.391245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.391257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.394135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.403318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.403685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.403712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.403727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.403973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.404193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.404212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.404224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.407131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.416487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.416856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.416891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.416907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.417149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.417357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.417376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.417388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.420201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.429561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.429966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.429996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.430012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.430244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.430493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.430514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.430527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.433945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.442798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.494 [2024-07-12 11:28:34.443229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.494 [2024-07-12 11:28:34.443256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.494 [2024-07-12 11:28:34.443271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.494 [2024-07-12 11:28:34.443508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.494 [2024-07-12 11:28:34.443715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.494 [2024-07-12 11:28:34.443734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.494 [2024-07-12 11:28:34.443746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.494 [2024-07-12 11:28:34.446787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.494 [2024-07-12 11:28:34.455947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.456333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.456359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.456379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.456598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.456806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.456824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.456836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.459729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.469095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.469476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.469503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.469518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.469751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.469976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.469998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.470010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.472832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.482178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.482541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.482582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.482598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.482842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.483063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.483082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.483094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.485906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.495255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.495598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.495626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.495642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.495863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.496089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.496114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.496127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.499017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.508492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.508860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.508895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.508911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.509154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.509362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.509381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.509392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.512211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.521522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.521888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.521916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.521933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.522173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.522379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.522397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.522409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.525303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.534632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.534995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.535023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.535040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.535279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.535486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.535505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.535517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.538455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.548051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.548375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.548417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.548433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.548655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.548896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.548932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.548945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.552098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.561520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.561923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.561953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.561969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.562210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.495 [2024-07-12 11:28:34.562462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.495 [2024-07-12 11:28:34.562482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.495 [2024-07-12 11:28:34.562495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.495 [2024-07-12 11:28:34.565653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.495 [2024-07-12 11:28:34.575073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.495 [2024-07-12 11:28:34.575414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.495 [2024-07-12 11:28:34.575456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.495 [2024-07-12 11:28:34.575472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.495 [2024-07-12 11:28:34.575693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.496 [2024-07-12 11:28:34.575941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.496 [2024-07-12 11:28:34.575964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.496 [2024-07-12 11:28:34.575979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.496 [2024-07-12 11:28:34.579055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.496 [2024-07-12 11:28:34.588452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.496 [2024-07-12 11:28:34.588885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.496 [2024-07-12 11:28:34.588914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.496 [2024-07-12 11:28:34.588931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.496 [2024-07-12 11:28:34.589165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.496 [2024-07-12 11:28:34.589376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.496 [2024-07-12 11:28:34.589395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.496 [2024-07-12 11:28:34.589407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.496 [2024-07-12 11:28:34.592401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:08.496 [2024-07-12 11:28:34.601738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.496 [2024-07-12 11:28:34.602125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.496 [2024-07-12 11:28:34.602154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.496 [2024-07-12 11:28:34.602185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.496 [2024-07-12 11:28:34.602419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.496 [2024-07-12 11:28:34.602611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.496 [2024-07-12 11:28:34.602630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.496 [2024-07-12 11:28:34.602642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.496 [2024-07-12 11:28:34.605658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.496 [2024-07-12 11:28:34.615120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.496 [2024-07-12 11:28:34.615502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.496 [2024-07-12 11:28:34.615530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.496 [2024-07-12 11:28:34.615546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.496 [2024-07-12 11:28:34.615786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.496 [2024-07-12 11:28:34.616036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.496 [2024-07-12 11:28:34.616058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.496 [2024-07-12 11:28:34.616072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.496 [2024-07-12 11:28:34.618993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.628583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.628956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.629005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.629025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.629279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.629471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.629489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.629506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.632695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.641730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.642178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.642229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.642245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.642507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.642699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.642717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.642729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.645627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.655033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.655536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.655588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.655604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.655875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.656093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.656113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.656126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.659024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.668307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.668785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.668838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.668854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.669111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.669320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.669338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.669351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.672250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.681553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.681920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.681954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.681971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.682185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.682405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.682425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.682453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.685896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.694809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.695221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.695269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.695285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.695525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.695731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.695749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.695761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.698720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.708049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.708490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.708541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.708557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.708802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.709023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.709043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.755 [2024-07-12 11:28:34.709055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.755 [2024-07-12 11:28:34.711973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.755 [2024-07-12 11:28:34.721216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.755 [2024-07-12 11:28:34.721707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.755 [2024-07-12 11:28:34.721750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.755 [2024-07-12 11:28:34.721768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.755 [2024-07-12 11:28:34.722036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.755 [2024-07-12 11:28:34.722256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.755 [2024-07-12 11:28:34.722275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.722287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.725163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.734314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.734647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.734675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.734691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.734944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.735144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.735163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.735190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.738099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.747441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.747809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.747852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.747876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.748136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.748344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.748362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.748374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.751227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.760486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.760917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.760959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.760976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.761217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.761423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.761441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.761453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.764425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.773713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.774163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.774205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.774221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.774457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.774648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.774666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.774678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.777492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.786717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.787066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.787093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.787109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.787329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.787537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.787555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.787567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.790531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.799972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.800331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.800368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.800400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.800634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.800825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.800843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.800882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.803790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.813077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.813595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.813637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.813658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.813939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.814132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.814151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.814163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.816931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.826375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.826813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.826841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.826858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.756 [2024-07-12 11:28:34.827101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.756 [2024-07-12 11:28:34.827333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.756 [2024-07-12 11:28:34.827352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.756 [2024-07-12 11:28:34.827365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.756 [2024-07-12 11:28:34.830293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.756 [2024-07-12 11:28:34.839499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.756 [2024-07-12 11:28:34.839877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.756 [2024-07-12 11:28:34.839922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.756 [2024-07-12 11:28:34.839938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.757 [2024-07-12 11:28:34.840202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.757 [2024-07-12 11:28:34.840394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.757 [2024-07-12 11:28:34.840412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.757 [2024-07-12 11:28:34.840424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.757 [2024-07-12 11:28:34.843202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.757 [2024-07-12 11:28:34.852558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.757 [2024-07-12 11:28:34.852925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.757 [2024-07-12 11:28:34.852968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.757 [2024-07-12 11:28:34.852984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.757 [2024-07-12 11:28:34.853235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.757 [2024-07-12 11:28:34.853441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.757 [2024-07-12 11:28:34.853464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.757 [2024-07-12 11:28:34.853476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.757 [2024-07-12 11:28:34.856369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.757 [2024-07-12 11:28:34.865736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.757 [2024-07-12 11:28:34.866065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.757 [2024-07-12 11:28:34.866108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:08.757 [2024-07-12 11:28:34.866124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:08.757 [2024-07-12 11:28:34.866354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:08.757 [2024-07-12 11:28:34.866562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.757 [2024-07-12 11:28:34.866580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.757 [2024-07-12 11:28:34.866592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:08.757 [2024-07-12 11:28:34.869448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:08.757 [2024-07-12 11:28:34.878761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:08.757 [2024-07-12 11:28:34.879150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.757 [2024-07-12 11:28:34.879192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:08.757 [2024-07-12 11:28:34.879207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:08.757 [2024-07-12 11:28:34.879459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:08.757 [2024-07-12 11:28:34.879665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:08.757 [2024-07-12 11:28:34.879684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:08.757 [2024-07-12 11:28:34.879696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:08.757 [2024-07-12 11:28:34.882643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.015 [2024-07-12 11:28:34.892297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.015 [2024-07-12 11:28:34.892816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.015 [2024-07-12 11:28:34.892874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.015 [2024-07-12 11:28:34.892892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.015 [2024-07-12 11:28:34.893145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.015 [2024-07-12 11:28:34.893370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.015 [2024-07-12 11:28:34.893388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.015 [2024-07-12 11:28:34.893400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.015 [2024-07-12 11:28:34.896302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.015 [2024-07-12 11:28:34.905286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.015 [2024-07-12 11:28:34.905784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.015 [2024-07-12 11:28:34.905825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.015 [2024-07-12 11:28:34.905842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.015 [2024-07-12 11:28:34.906094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.015 [2024-07-12 11:28:34.906320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.015 [2024-07-12 11:28:34.906339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.015 [2024-07-12 11:28:34.906351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.015 [2024-07-12 11:28:34.909296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.015 [2024-07-12 11:28:34.918361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.015 [2024-07-12 11:28:34.918738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.015 [2024-07-12 11:28:34.918787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.015 [2024-07-12 11:28:34.918820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.015 [2024-07-12 11:28:34.919088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.015 [2024-07-12 11:28:34.919298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.015 [2024-07-12 11:28:34.919317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.015 [2024-07-12 11:28:34.919329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.015 [2024-07-12 11:28:34.922216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.015 [2024-07-12 11:28:34.931543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.015 [2024-07-12 11:28:34.931928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.015 [2024-07-12 11:28:34.931956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.015 [2024-07-12 11:28:34.931972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.015 [2024-07-12 11:28:34.932215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.015 [2024-07-12 11:28:34.932463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.015 [2024-07-12 11:28:34.932483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.015 [2024-07-12 11:28:34.932496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.015 [2024-07-12 11:28:34.935917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.015 [2024-07-12 11:28:34.944635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.015 [2024-07-12 11:28:34.945030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.015 [2024-07-12 11:28:34.945058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.015 [2024-07-12 11:28:34.945074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.015 [2024-07-12 11:28:34.945321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.015 [2024-07-12 11:28:34.945529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.015 [2024-07-12 11:28:34.945548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.015 [2024-07-12 11:28:34.945560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.015 [2024-07-12 11:28:34.948573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.015 [2024-07-12 11:28:34.957927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.015 [2024-07-12 11:28:34.958253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.015 [2024-07-12 11:28:34.958292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:34.958307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:34.958521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:34.958729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:34.958748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:34.958760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:34.961600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:34.971027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:34.971347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:34.971373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:34.971389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:34.971606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:34.971815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:34.971834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:34.971845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:34.974776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:34.984273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:34.984765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:34.984807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:34.984824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:34.985077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:34.985306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:34.985325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:34.985341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:34.988151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:34.997251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:34.997611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:34.997637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:34.997653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:34.997896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:34.998106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:34.998125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:34.998137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.000927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.010351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.010845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.010893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.010912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.011176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.011368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.011386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.011398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.014254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.023509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.023835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.023863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.023909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.024153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.024361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.024379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.024391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.027276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.036477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.036806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.036837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.036853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.037098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.037323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.037342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.037354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.040281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.049604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.049970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.050011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.050027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.050276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.050483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.050501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.050513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.053405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.062624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.063022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.063049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.063065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.063286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.063489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.063508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.063521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.066447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.075658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.076022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.076048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.076064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.076278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.076489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.076507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.076519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.079308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.088701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.089088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.016 [2024-07-12 11:28:35.089115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.016 [2024-07-12 11:28:35.089146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.016 [2024-07-12 11:28:35.089367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.016 [2024-07-12 11:28:35.089575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.016 [2024-07-12 11:28:35.089593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.016 [2024-07-12 11:28:35.089605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.016 [2024-07-12 11:28:35.092423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.016 [2024-07-12 11:28:35.102065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.016 [2024-07-12 11:28:35.102502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.017 [2024-07-12 11:28:35.102543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.017 [2024-07-12 11:28:35.102559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.017 [2024-07-12 11:28:35.102779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.017 [2024-07-12 11:28:35.103018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.017 [2024-07-12 11:28:35.103039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.017 [2024-07-12 11:28:35.103051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.017 [2024-07-12 11:28:35.105934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.017 [2024-07-12 11:28:35.115202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.017 [2024-07-12 11:28:35.115529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.017 [2024-07-12 11:28:35.115556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.017 [2024-07-12 11:28:35.115571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.017 [2024-07-12 11:28:35.115792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.017 [2024-07-12 11:28:35.116030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.017 [2024-07-12 11:28:35.116058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.017 [2024-07-12 11:28:35.116072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.017 [2024-07-12 11:28:35.118863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.017 [2024-07-12 11:28:35.128333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.017 [2024-07-12 11:28:35.128666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.017 [2024-07-12 11:28:35.128693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.017 [2024-07-12 11:28:35.128709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.017 [2024-07-12 11:28:35.128945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.017 [2024-07-12 11:28:35.129155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.017 [2024-07-12 11:28:35.129174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.017 [2024-07-12 11:28:35.129185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.017 [2024-07-12 11:28:35.131957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.017 [2024-07-12 11:28:35.141418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.017 [2024-07-12 11:28:35.141812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.017 [2024-07-12 11:28:35.141839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.017 [2024-07-12 11:28:35.141855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.017 [2024-07-12 11:28:35.142122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.017 [2024-07-12 11:28:35.142331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.017 [2024-07-12 11:28:35.142350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.017 [2024-07-12 11:28:35.142361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.017 [2024-07-12 11:28:35.145633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.294 [2024-07-12 11:28:35.154792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.294 [2024-07-12 11:28:35.155182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.294 [2024-07-12 11:28:35.155225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.294 [2024-07-12 11:28:35.155241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.294 [2024-07-12 11:28:35.155474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.294 [2024-07-12 11:28:35.155681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.294 [2024-07-12 11:28:35.155700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.294 [2024-07-12 11:28:35.155712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.294 [2024-07-12 11:28:35.158541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 676662 Killed "${NVMF_APP[@]}" "$@"
00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:09.294 [2024-07-12 11:28:35.168046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.294 [2024-07-12 11:28:35.168429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.294 [2024-07-12 11:28:35.168456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.294 [2024-07-12 11:28:35.168472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.294 [2024-07-12 11:28:35.168715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.294 [2024-07-12 11:28:35.168966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.294 [2024-07-12 11:28:35.168988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.294 [2024-07-12 11:28:35.169002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=677584 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 677584 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 677584 ']' 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.294 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.294 [2024-07-12 11:28:35.172282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.294 [2024-07-12 11:28:35.181433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.294 [2024-07-12 11:28:35.181878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.294 [2024-07-12 11:28:35.181906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.294 [2024-07-12 11:28:35.181923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.294 [2024-07-12 11:28:35.182136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.294 [2024-07-12 11:28:35.182378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.294 [2024-07-12 11:28:35.182398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.294 [2024-07-12 11:28:35.182411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.294 [2024-07-12 11:28:35.185812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.294 [2024-07-12 11:28:35.194831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.294 [2024-07-12 11:28:35.195189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.294 [2024-07-12 11:28:35.195217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.294 [2024-07-12 11:28:35.195233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.294 [2024-07-12 11:28:35.195466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.294 [2024-07-12 11:28:35.195679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.294 [2024-07-12 11:28:35.195698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.294 [2024-07-12 11:28:35.195711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.294 [2024-07-12 11:28:35.198774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.294 [2024-07-12 11:28:35.208276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.294 [2024-07-12 11:28:35.208632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.294 [2024-07-12 11:28:35.208660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.294 [2024-07-12 11:28:35.208677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.294 [2024-07-12 11:28:35.208936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.294 [2024-07-12 11:28:35.209156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.294 [2024-07-12 11:28:35.209177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.294 [2024-07-12 11:28:35.209191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.294 [2024-07-12 11:28:35.212241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.294 [2024-07-12 11:28:35.214240] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:24:09.294 [2024-07-12 11:28:35.214315] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:09.294 [2024-07-12 11:28:35.221512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:09.294 [2024-07-12 11:28:35.221842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.294 [2024-07-12 11:28:35.221894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420
00:24:09.294 [2024-07-12 11:28:35.221911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set
00:24:09.294 [2024-07-12 11:28:35.222153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor
00:24:09.294 [2024-07-12 11:28:35.222386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.294 [2024-07-12 11:28:35.222406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.294 [2024-07-12 11:28:35.222419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.294 [2024-07-12 11:28:35.225575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.294 [2024-07-12 11:28:35.235023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.294 [2024-07-12 11:28:35.235418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.294 [2024-07-12 11:28:35.235446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.294 [2024-07-12 11:28:35.235462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.294 [2024-07-12 11:28:35.235709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.294 [2024-07-12 11:28:35.235933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.294 [2024-07-12 11:28:35.235954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.294 [2024-07-12 11:28:35.235967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.294 [2024-07-12 11:28:35.238986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.294 [2024-07-12 11:28:35.248302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.294 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.294 [2024-07-12 11:28:35.248616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.294 [2024-07-12 11:28:35.248657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.294 [2024-07-12 11:28:35.248673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.294 [2024-07-12 11:28:35.248929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.294 [2024-07-12 11:28:35.249152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.294 [2024-07-12 11:28:35.249172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.294 [2024-07-12 11:28:35.249200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.294 [2024-07-12 11:28:35.252305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.261674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.262044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.262072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.262089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.262322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.262520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.262539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.262551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.265638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.274919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.275327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.275355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.275371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.275613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.275812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.275831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.275884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.278534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.295 [2024-07-12 11:28:35.278908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.288303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.288808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.288892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.288918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.289166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.289372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.289392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.289407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.292393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.301642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.302024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.302055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.302073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.302314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.302518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.302538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.302551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.305622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.315015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.315476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.315504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.315520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.315762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.315987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.316008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.316021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.319002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.328282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.328647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.328675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.328691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.328931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.329136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.329156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.329184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.332143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.341636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.342163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.342202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.342221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.342468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.342670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.342690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.342705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.345679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.354992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.355417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.355460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.355477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.355723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.355955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.355977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.355991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.358958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.368288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.368634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.368662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.368679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.368939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.369143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.369178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.369191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.372175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.381616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.382018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.382049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.382066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.382293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.382507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.382526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.382539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.385506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.295 [2024-07-12 11:28:35.385706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.295 [2024-07-12 11:28:35.385736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.295 [2024-07-12 11:28:35.385750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.295 [2024-07-12 11:28:35.385761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:09.295 [2024-07-12 11:28:35.385770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.295 [2024-07-12 11:28:35.385850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.295 [2024-07-12 11:28:35.386008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.295 [2024-07-12 11:28:35.386013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.295 [2024-07-12 11:28:35.395047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.395541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.395577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.395596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.395832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.396081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.396104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.396121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.399293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.408554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.409050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.409089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.409107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.409344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.409558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.409579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.409595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.295 [2024-07-12 11:28:35.412754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.295 [2024-07-12 11:28:35.422158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.295 [2024-07-12 11:28:35.422583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.295 [2024-07-12 11:28:35.422621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.295 [2024-07-12 11:28:35.422640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.295 [2024-07-12 11:28:35.422878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.295 [2024-07-12 11:28:35.423101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.295 [2024-07-12 11:28:35.423123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.295 [2024-07-12 11:28:35.423139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.426542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 [2024-07-12 11:28:35.435716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.436203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.436240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.436259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.436481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.436700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.436721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.436737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.439969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 [2024-07-12 11:28:35.449406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.449891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.449930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.449949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.450196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.450410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.450430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.450447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.453707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 [2024-07-12 11:28:35.463006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.463478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.463514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.463532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.463752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.463981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.464004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.464020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.467206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 [2024-07-12 11:28:35.476538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.476924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.476953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.476969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.477183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.477409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.477429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.477444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.480593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 [2024-07-12 11:28:35.490235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.490565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.490593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.490610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.490823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.491049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.491071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.491093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.494335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.554 [2024-07-12 11:28:35.503712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.504058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.504088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.504104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.504334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.504545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.504566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.504580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.507789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 [2024-07-12 11:28:35.517438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.517784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.517812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.517828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.518058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.518290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.518311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.518324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.554 [2024-07-12 11:28:35.521547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.554 [2024-07-12 11:28:35.529097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.554 [2024-07-12 11:28:35.531144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.531513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.531541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.531557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.554 [2024-07-12 11:28:35.531791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.554 [2024-07-12 11:28:35.532034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.554 [2024-07-12 11:28:35.532073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.554 [2024-07-12 11:28:35.532088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.554 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.554 [2024-07-12 11:28:35.535356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.554 [2024-07-12 11:28:35.544648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.554 [2024-07-12 11:28:35.544998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.554 [2024-07-12 11:28:35.545026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.554 [2024-07-12 11:28:35.545043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.555 [2024-07-12 11:28:35.545274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.555 [2024-07-12 11:28:35.545493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.555 [2024-07-12 11:28:35.545513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.555 [2024-07-12 11:28:35.545525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.555 [2024-07-12 11:28:35.548699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.555 [2024-07-12 11:28:35.558186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.555 [2024-07-12 11:28:35.558572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.555 [2024-07-12 11:28:35.558603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.555 [2024-07-12 11:28:35.558621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.555 [2024-07-12 11:28:35.558854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.555 [2024-07-12 11:28:35.559082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.555 [2024-07-12 11:28:35.559103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.555 [2024-07-12 11:28:35.559118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.555 [2024-07-12 11:28:35.562279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.555 [2024-07-12 11:28:35.571712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.555 [2024-07-12 11:28:35.572181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.555 [2024-07-12 11:28:35.572216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.555 [2024-07-12 11:28:35.572236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.555 [2024-07-12 11:28:35.572472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.555 [2024-07-12 11:28:35.572693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.555 [2024-07-12 11:28:35.572714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.555 [2024-07-12 11:28:35.572730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.555 Malloc0 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.555 [2024-07-12 11:28:35.576009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.555 [2024-07-12 11:28:35.585235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.555 [2024-07-12 11:28:35.585615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.555 [2024-07-12 11:28:35.585643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4ac0 with addr=10.0.0.2, port=4420 00:24:09.555 [2024-07-12 11:28:35.585659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4ac0 is same with the state(5) to be set 00:24:09.555 [2024-07-12 11:28:35.585881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4ac0 (9): Bad file descriptor 00:24:09.555 [2024-07-12 11:28:35.586099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.555 [2024-07-12 11:28:35.586120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.555 [2024-07-12 11:28:35.586134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.555 [2024-07-12 11:28:35.589423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.555 [2024-07-12 11:28:35.593008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.555 11:28:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 676940 00:24:09.555 [2024-07-12 11:28:35.598743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.555 [2024-07-12 11:28:35.630702] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:19.520 00:24:19.520 Latency(us) 00:24:19.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.520 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:19.520 Verification LBA range: start 0x0 length 0x4000 00:24:19.520 Nvme1n1 : 15.01 6797.18 26.55 10169.65 0.00 7521.25 843.47 19903.53 00:24:19.520 =================================================================================================================== 00:24:19.520 Total : 6797.18 26.55 10169.65 0.00 7521.25 843.47 19903.53 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.520 rmmod nvme_tcp 00:24:19.520 rmmod nvme_fabrics 00:24:19.520 rmmod nvme_keyring 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 677584 ']' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 677584 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 677584 ']' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 677584 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 677584 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 677584' 00:24:19.520 killing process with pid 677584 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 677584 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 677584 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.520 11:28:45 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.520 11:28:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.425 11:28:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.425 00:24:21.425 real 0m22.652s 00:24:21.425 user 1m0.743s 00:24:21.425 sys 0m4.201s 00:24:21.425 11:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.425 11:28:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:21.425 ************************************ 00:24:21.425 END TEST nvmf_bdevperf 00:24:21.425 ************************************ 00:24:21.425 11:28:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:21.425 11:28:47 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:21.425 11:28:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:21.425 11:28:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.425 11:28:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.425 ************************************ 00:24:21.425 START TEST nvmf_target_disconnect 00:24:21.425 ************************************ 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:21.425 * Looking for test storage... 
00:24:21.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.425 11:28:47 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.425 11:28:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:23.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:23.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.954 11:28:49 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:23.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:23.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.954 11:28:49 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:24:23.954 00:24:23.954 --- 10.0.0.2 ping statistics --- 00:24:23.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.954 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:24:23.954 00:24:23.954 --- 10.0.0.1 ping statistics --- 00:24:23.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.954 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.954 11:28:49 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:23.954 ************************************ 00:24:23.954 START TEST nvmf_target_disconnect_tc1 00:24:23.954 ************************************ 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.954 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.955 11:28:49 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.955 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.955 [2024-07-12 11:28:49.802896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.955 [2024-07-12 11:28:49.802974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf31a0 with addr=10.0.0.2, port=4420 00:24:23.955 [2024-07-12 11:28:49.803008] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:23.955 [2024-07-12 11:28:49.803042] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:23.955 [2024-07-12 11:28:49.803055] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:23.955 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:23.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:23.955 Initializing NVMe Controllers 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.955 00:24:23.955 real 0m0.089s 00:24:23.955 user 0m0.043s 00:24:23.955 sys 0m0.046s 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:23.955 ************************************ 00:24:23.955 END TEST nvmf_target_disconnect_tc1 00:24:23.955 ************************************ 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:23.955 ************************************ 00:24:23.955 START TEST nvmf_target_disconnect_tc2 00:24:23.955 
************************************ 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=680666 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 680666 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 680666 ']' 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:23.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.955 11:28:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.955 [2024-07-12 11:28:49.891165] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:24:23.955 [2024-07-12 11:28:49.891275] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.955 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.955 [2024-07-12 11:28:49.957070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.955 [2024-07-12 11:28:50.080317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.955 [2024-07-12 11:28:50.080366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.955 [2024-07-12 11:28:50.080395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.955 [2024-07-12 11:28:50.080407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.955 [2024-07-12 11:28:50.080417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
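The trace above blocks in `waitforlisten` until the freshly started `nvmf_tgt` (pid 680666) is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that poll-until-present pattern, using a hypothetical `waitfor` helper and `/tmp/fake.sock` stand-in path — SPDK's real helper additionally probes the RPC server rather than just checking the path:

```shell
# Hypothetical sketch of a waitforlisten-style loop: poll until a socket
# path appears, then return 0; return 1 on timeout. Not SPDK's actual code.
waitfor() {
  local path=$1 retries=${2:-50}
  while (( retries-- > 0 )); do
    [ -e "$path" ] && return 0   # real helper checks for a listening UNIX socket
    sleep 0.1
  done
  return 1                       # timed out
}

rm -f /tmp/fake.sock
( sleep 0.3; touch /tmp/fake.sock ) &   # stand-in for the app creating its socket
waitfor /tmp/fake.sock && echo "listener is up"
wait
```

The retry-with-sleep shape mirrors the `local max_retries=100` visible in the trace; the real helper fails the test if the daemon never opens its socket.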
00:24:23.955 [2024-07-12 11:28:50.080731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:23.955 [2024-07-12 11:28:50.080764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:23.955 [2024-07-12 11:28:50.080822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:23.955 [2024-07-12 11:28:50.080825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.212 Malloc0 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.212 [2024-07-12 11:28:50.255616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.212 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.213 [2024-07-12 11:28:50.283837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=680693 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.213 11:28:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:24.213 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.770 11:28:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 680666 00:24:26.770 11:28:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error 
(sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 
00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 [2024-07-12 11:28:52.308030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 
00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 [2024-07-12 11:28:52.308357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 
starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O 
failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Write completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 [2024-07-12 11:28:52.308673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.770 starting I/O failed 00:24:26.770 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 
00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Read completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 Write completed with error (sct=0, sc=8) 00:24:26.771 starting I/O failed 00:24:26.771 [2024-07-12 11:28:52.308970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.771 [2024-07-12 11:28:52.309081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.309114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.309285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.309312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.309477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.309505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.309657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.309684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.309836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.309862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.309976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.310098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.310269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.310393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.310560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.310743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.310852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.310886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.311039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.311184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.311352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.311576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.311691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.311809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.311957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.311984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.312071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.312185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.312329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.312444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.312574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.312684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.312845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.312893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.313020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.313200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.313346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.313464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.313570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.313716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.313903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.313944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.314436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.314957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.314984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.315074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.315213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.315323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.315473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.315619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.315730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.315852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.315885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.315981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.316095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.316213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 
00:24:26.771 [2024-07-12 11:28:52.316361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.316497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.316639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.771 [2024-07-12 11:28:52.316748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.771 [2024-07-12 11:28:52.316775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.771 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.316860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.316921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.317019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.317045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.317185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.317211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.317474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.317499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.317651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.317676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.317761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.317787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.317874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.317901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.318554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.318954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.318999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.319164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.319205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.319301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.319329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.319468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.319494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.319615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.319641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.319763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.319788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.319904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.319931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.320021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.320135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.320278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.320388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.320535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.320690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.320809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.320839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.320999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.321127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.321255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.321395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.321539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.321709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.321855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.321892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.322021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.322133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.322270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.322384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.322558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.322666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.322806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.322963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.322990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.323076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.323221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.323374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.323537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.323654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.323768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.323904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.323931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.324038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.324065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.324204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.324230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.324346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.324373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.324464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.324491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.324619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.324660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 00:24:26.772 [2024-07-12 11:28:52.324775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.772 [2024-07-12 11:28:52.324816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.772 qpair failed and we were unable to recover it. 
00:24:26.772 [2024-07-12 11:28:52.324954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.324981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.325073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.325181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.325318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.325451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 
00:24:26.773 [2024-07-12 11:28:52.325575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.325719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.325874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.325902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.326021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.326162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 
00:24:26.773 [2024-07-12 11:28:52.326280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.326425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.326548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.326680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.326822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 
00:24:26.773 [2024-07-12 11:28:52.326972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.326999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.327086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.327113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.327196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.327223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.773 qpair failed and we were unable to recover it. 00:24:26.773 [2024-07-12 11:28:52.327313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.773 [2024-07-12 11:28:52.327341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.327450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.327476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.327589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.327616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.327700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.327727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.327877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.327904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.328036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.328076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.328205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.328245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.328370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.328398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.328550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.328577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.328728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.328754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.328862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.328912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.329062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.329185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.329302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.329445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.329616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.329782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.329935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.329965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.330057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.330083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.330197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.330223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.330341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.330368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.330488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.330515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.330659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.330685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.330811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.330840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.331013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.331052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.331152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.331180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.331325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.331351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.331474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.331500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.331646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.331700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.331839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.331872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.331989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.332100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.332270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.332416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.332557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.332710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.332872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.332913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.333009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.333152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.333298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.333408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.333543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.333655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.333791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.333817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.333973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.334128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.334267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.334383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.334561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.334727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.334876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.334905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.335030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.335059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.335195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.335235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.335358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.335385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.335561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.335612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.335758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.335784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.335875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.335901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 
00:24:26.774 [2024-07-12 11:28:52.335999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.336025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.336140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.336166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.774 [2024-07-12 11:28:52.336251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.774 [2024-07-12 11:28:52.336278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.774 qpair failed and we were unable to recover it. 00:24:26.775 [2024-07-12 11:28:52.336394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.336422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 00:24:26.775 [2024-07-12 11:28:52.336539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.336574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 
00:24:26.775 [2024-07-12 11:28:52.336745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.336785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 00:24:26.775 [2024-07-12 11:28:52.336878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.336906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 00:24:26.775 [2024-07-12 11:28:52.337048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.337075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 00:24:26.775 [2024-07-12 11:28:52.337193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.337218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 00:24:26.775 [2024-07-12 11:28:52.337307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.775 [2024-07-12 11:28:52.337336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.775 qpair failed and we were unable to recover it. 
00:24:26.775 [2024-07-12 11:28:52.337456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.337482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.337633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.337661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.337801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.337827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.337952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.337979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.338105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.338253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.338369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.338512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.338712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.338840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.338993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.339137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.339275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.339406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.339578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.339712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.339856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.339890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.340957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.340983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.341895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.341923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.342949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.342976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.775 qpair failed and we were unable to recover it.
00:24:26.775 [2024-07-12 11:28:52.343817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.775 [2024-07-12 11:28:52.343846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.343978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.344121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.344269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.344413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.344615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.344758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.344907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.344948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.345102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.345130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.345240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.345267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.345407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.345434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.345570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.345616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.345745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.345773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.345887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.345927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.346074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.346103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.346209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.346236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.346360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.346392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.346563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.346612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.346758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.346784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.346907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.346935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.347952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.347979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.348882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.348912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.349940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.349980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.350849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.350881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.351859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.351891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.352013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.352038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.352129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.352155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.352235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.352261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.776 qpair failed and we were unable to recover it.
00:24:26.776 [2024-07-12 11:28:52.352411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.776 [2024-07-12 11:28:52.352437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.352581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.352608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.352723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.352749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.352872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.352898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.352988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.353130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.353240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.353388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.353533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.353684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.353854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.353892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.354858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.354893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.355888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.355916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.356894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.356921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.357947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.357975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.358896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.358983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.359895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.777 [2024-07-12 11:28:52.359981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.777 [2024-07-12 11:28:52.360008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.777 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.360926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.360956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.361067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.361107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.361225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.361252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.361341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.361367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.361547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.361591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.361731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.361757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.361879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.361906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.362971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.362997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.363106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.363133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.363301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.363328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.363503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.363556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.363650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.363675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.363818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.363849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.363992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.778 [2024-07-12 11:28:52.364031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.778 qpair failed and we were unable to recover it.
00:24:26.778 [2024-07-12 11:28:52.364161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.364189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.364360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.364411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.364581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.364634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.364776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.364802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.364921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.364950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 
00:24:26.778 [2024-07-12 11:28:52.365097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.365127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.365266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.365292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.365405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.365432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.365658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.365716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.365854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.365885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 
00:24:26.778 [2024-07-12 11:28:52.366000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.366113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.366273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.366400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.366536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 
00:24:26.778 [2024-07-12 11:28:52.366703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.366851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.366882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.366975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.367101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.367279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 
00:24:26.778 [2024-07-12 11:28:52.367453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.367589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.367755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.367897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.367923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.368062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.368087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 
00:24:26.778 [2024-07-12 11:28:52.368165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.368191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.368331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.368358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.368473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.368499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.778 [2024-07-12 11:28:52.368619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.778 [2024-07-12 11:28:52.368645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.778 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.368786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.368812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.368927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.368954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.369038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.369196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.369367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.369539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.369655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.369806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.369966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.369993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.370134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.370162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.370301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.370328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.370449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.370475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.370595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.370621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.370708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.370734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.370849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.370888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.371026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.371143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.371288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.371428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.371591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.371727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.371897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.371925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.372025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.372160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.372296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.372428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.372529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.372693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.372881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.372922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.373038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.373065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.373209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.373238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.373468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.373524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.373615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.373643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.373749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.373775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.373890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.373917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.374028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.374172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.374308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.374442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.374613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.374749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.374903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.374930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.375064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.375206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.375351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.375465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.375611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.375726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.375863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.375897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.376016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.376042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.376135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.376161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 
00:24:26.779 [2024-07-12 11:28:52.376274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.376300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.376415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.376443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.376563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.779 [2024-07-12 11:28:52.376589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.779 qpair failed and we were unable to recover it. 00:24:26.779 [2024-07-12 11:28:52.376693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.376718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.376834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.376860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 
00:24:26.780 [2024-07-12 11:28:52.376983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.377141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.377276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.377396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.377519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 
00:24:26.780 [2024-07-12 11:28:52.377645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.377792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.377912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.377939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.378026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.378052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 00:24:26.780 [2024-07-12 11:28:52.378163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.780 [2024-07-12 11:28:52.378189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.780 qpair failed and we were unable to recover it. 
00:24:26.780 [2024-07-12 11:28:52.378272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.378298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.378419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.378445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.378529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.378555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.378668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.378696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.378813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.378843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.378968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.378995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.379102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.379128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.379322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.379377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.379580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.379632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.379746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.379772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.379857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.379891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.379983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.380845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.380975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.381964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.381991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.382102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.382128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.382241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.382267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.382442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.382501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.382640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.382666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.382775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.382805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.382951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.382978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.383101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.383130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.383273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.383327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.780 [2024-07-12 11:28:52.383494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.780 [2024-07-12 11:28:52.383549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.780 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.383665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.383691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.383776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.383802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.383919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.383946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.384945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.384972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.385065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.385091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.385215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.385241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.385422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.385484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.385697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.385753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.385874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.385903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.386047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.386074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.386168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.386193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.386381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.386433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.386622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.386672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.386812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.386837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.386959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.386985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.387919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.387945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.388962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.388992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.389933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.389959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.390863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.390985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.391903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.391932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.392076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.781 [2024-07-12 11:28:52.392102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:26.781 qpair failed and we were unable to recover it.
00:24:26.781 [2024-07-12 11:28:52.392192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.392218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.781 [2024-07-12 11:28:52.392333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.392359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.781 [2024-07-12 11:28:52.392530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.392586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.781 [2024-07-12 11:28:52.392671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.392702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.781 [2024-07-12 11:28:52.392821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.392848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 
00:24:26.781 [2024-07-12 11:28:52.392949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.392975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.781 [2024-07-12 11:28:52.393092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.393118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.781 [2024-07-12 11:28:52.393221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.781 [2024-07-12 11:28:52.393247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.781 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.393360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.393386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.393504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.393530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.393634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.393660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.393799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.393827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.393923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.393952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.394093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.394120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.394203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.394229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.394344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.394371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.394459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.394524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.394733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.394760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.394878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.394906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.395022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.395049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.395128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.395155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.395281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.395333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.395527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.395579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.395745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.395773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.395913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.395940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.396029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.396144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.396256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.396404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.396519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.396658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.396793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.396934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.396962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.397076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.397102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.397216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.397243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.397482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.397554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.397793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.397845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.398015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.398160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.398274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.398438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.398574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.398689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.398856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.398892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.399033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.399175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.399287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.399488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.399632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.399798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.399939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.399966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.400048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.400165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.400329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.400478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.400645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.400794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.400952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.400979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.401100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.401243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.401358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.401469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.401609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.401778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.401952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.401979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 
00:24:26.782 [2024-07-12 11:28:52.402098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.402124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.402233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.402259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.402344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.782 [2024-07-12 11:28:52.402370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.782 qpair failed and we were unable to recover it. 00:24:26.782 [2024-07-12 11:28:52.402454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.402483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.402565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.402591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 
00:24:26.783 [2024-07-12 11:28:52.402694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.402740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.402839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.402876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.402992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.403019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.403106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.403132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.403322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.403387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 
00:24:26.783 [2024-07-12 11:28:52.403544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.403601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.403747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.403772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.403892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.403919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.404034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.404061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.404265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.404318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 
00:24:26.783 [2024-07-12 11:28:52.404513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.404589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.404747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.404800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.404982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.405011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.405248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.405302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 00:24:26.783 [2024-07-12 11:28:52.405528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.783 [2024-07-12 11:28:52.405581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.783 qpair failed and we were unable to recover it. 
00:24:26.783 [2024-07-12 11:28:52.405708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.783 [2024-07-12 11:28:52.405734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:26.783 qpair failed and we were unable to recover it.
00:24:26.784 [the same three-line record — posix.c:1038:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error, then "qpair failed and we were unable to recover it." — repeats continuously from 11:28:52.405 through 11:28:52.425 for tqpair values 0x7fa0e0000b90, 0x7fa0d8000b90, and 0xb4f200, all targeting addr=10.0.0.2, port=4420]
00:24:26.784 [2024-07-12 11:28:52.425375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.784 [2024-07-12 11:28:52.425402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.784 qpair failed and we were unable to recover it. 00:24:26.784 [2024-07-12 11:28:52.425493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.784 [2024-07-12 11:28:52.425519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.784 qpair failed and we were unable to recover it. 00:24:26.784 [2024-07-12 11:28:52.425609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.425636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.425726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.425754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.425887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.425927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.426050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.426078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.426197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.426223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.426365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.426391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.426497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.426523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.426707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.426776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.426896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.426925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.427039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.427066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.427148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.427174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.427383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.427464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.427618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.427672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.427887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.427936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.428028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.428055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.428170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.428202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.428360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.428413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.428626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.428678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.428829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.428855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.428978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.429101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.429234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.429346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.429549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.429744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.429862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.429904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.430021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.430047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.430166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.430192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.430271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.430297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.430405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.430457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.430649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.430699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.430946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.430973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.431055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.431082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.431227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.431253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.431448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.431511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.431750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.431817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.432012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.432039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.432127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.432153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.432397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.432460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.432731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.432795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.433026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.433052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.433136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.433163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.433253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.433279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.433367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.433393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.433528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.433593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.433896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.433948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.434043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.434067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.434236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.434299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.434525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.434588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.434929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.434956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.435041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.435067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.435312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.435374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.435695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.435759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.435979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.436031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.436287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.436350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.436672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.436745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.437023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.437075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.437261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.437325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 
00:24:26.785 [2024-07-12 11:28:52.437641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.437705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.437975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.438027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.785 qpair failed and we were unable to recover it. 00:24:26.785 [2024-07-12 11:28:52.438299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.785 [2024-07-12 11:28:52.438364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.438648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.438712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.438976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.439029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 
00:24:26.786 [2024-07-12 11:28:52.439257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.439320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.439639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.439702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.439912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.439985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.440208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.440260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.440553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.440617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 
00:24:26.786 [2024-07-12 11:28:52.440886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.440960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.441137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.441216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.441547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.441610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.441822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.441924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 00:24:26.786 [2024-07-12 11:28:52.442140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.786 [2024-07-12 11:28:52.442208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.786 qpair failed and we were unable to recover it. 
00:24:26.786 [2024-07-12 11:28:52.442477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.786 [2024-07-12 11:28:52.442528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.786 qpair failed and we were unable to recover it.
00:24:26.787 [... the three log entries above repeat unchanged, with successive timestamps from 2024-07-12 11:28:52.442746 through 2024-07-12 11:28:52.477638 ...]
00:24:26.787 [2024-07-12 11:28:52.477821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.787 [2024-07-12 11:28:52.477882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.787 qpair failed and we were unable to recover it. 00:24:26.787 [2024-07-12 11:28:52.478093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.787 [2024-07-12 11:28:52.478140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.787 qpair failed and we were unable to recover it. 00:24:26.787 [2024-07-12 11:28:52.478289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.787 [2024-07-12 11:28:52.478335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.787 qpair failed and we were unable to recover it. 00:24:26.787 [2024-07-12 11:28:52.478526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.787 [2024-07-12 11:28:52.478573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.787 qpair failed and we were unable to recover it. 00:24:26.787 [2024-07-12 11:28:52.478790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.787 [2024-07-12 11:28:52.478837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.787 qpair failed and we were unable to recover it. 
00:24:26.787 [2024-07-12 11:28:52.479076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.787 [2024-07-12 11:28:52.479123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.787 qpair failed and we were unable to recover it. 00:24:26.787 [2024-07-12 11:28:52.479296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.479343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.479492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.479540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.479701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.479747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.479922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.479970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.480208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.480256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.480402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.480449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.480703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.480751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.480947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.480996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.481192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.481247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.481438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.481487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.481705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.481756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.482010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.482061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.482271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.482324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.482526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.482577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.482788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.482840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.483052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.483103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.483279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.483332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.483531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.483584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.483783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.483834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.484066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.484118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.484308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.484358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.484592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.484642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.484851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.484916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.485153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.485203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.485446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.485496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.485662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.485712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.485917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.485968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.486207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.486257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.486502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.486552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.486747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.486797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.487011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.487062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.487266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.487316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.487528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.487579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.487777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.487830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.488101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.488153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.488356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.488407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.488619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.488669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.488875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.488927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.489145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.489195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.489374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.489423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.489628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.489677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.489891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.489942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.490134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.490184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.490371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.490421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.490607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.490656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.490880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.490932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.491138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.491188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.491386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.491438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.491649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.491707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.491909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.491960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.492139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.492190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.492387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.492436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.492643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.492693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.492881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.492932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.493136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.493186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.493424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.493473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.493678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.493727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.493920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.493972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.494172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.494222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 
00:24:26.788 [2024-07-12 11:28:52.494430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.494479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.494640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.494690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.788 [2024-07-12 11:28:52.494888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.788 [2024-07-12 11:28:52.494939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.788 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.495185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.495236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.495389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.495441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 
00:24:26.789 [2024-07-12 11:28:52.495599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.495649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.495824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.495903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.496108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.496161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.496396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.496446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.496645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.496695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 
00:24:26.789 [2024-07-12 11:28:52.496898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.496950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.497188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.497238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.497475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.497525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.497721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.497773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 00:24:26.789 [2024-07-12 11:28:52.498024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.789 [2024-07-12 11:28:52.498075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.789 qpair failed and we were unable to recover it. 
00:24:26.790 [2024-07-12 11:28:52.528969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.529030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.790 qpair failed and we were unable to recover it. 00:24:26.790 [2024-07-12 11:28:52.529225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.529287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.790 qpair failed and we were unable to recover it. 00:24:26.790 [2024-07-12 11:28:52.529550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.529608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.790 qpair failed and we were unable to recover it. 00:24:26.790 [2024-07-12 11:28:52.529891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.529952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.790 qpair failed and we were unable to recover it. 00:24:26.790 [2024-07-12 11:28:52.530171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.530229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.790 qpair failed and we were unable to recover it. 
00:24:26.790 [2024-07-12 11:28:52.530396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.530454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.790 qpair failed and we were unable to recover it. 00:24:26.790 [2024-07-12 11:28:52.530632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.790 [2024-07-12 11:28:52.530692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.530929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.530988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.531217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.531275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.531557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.531615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.531896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.531956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.532160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.532219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.532457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.532515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.532796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.532853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.533080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.533139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.533413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.533471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.533702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.533760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.533981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.534042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.534274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.534334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.534574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.534633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.534822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.534895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.535092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.535150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.535353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.535413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.535664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.535722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.535960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.536031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.536269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.536327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.536601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.536658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.536896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.536958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.537223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.537282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.537481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.537538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.537798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.537855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.538113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.538172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.538402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.538460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.538706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.538764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.539004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.539065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.539303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.539361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.539620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.539679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.539899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.539960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.540241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.540299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.540517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.540575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.540838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.540908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.541110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.541169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.541393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.541454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.541694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.541754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.541946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.542005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.542250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.542308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.542568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.542626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.542861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.542930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.543120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.543177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.543410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.543467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.543708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.543766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.544052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.544113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.544346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.544404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.544676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.544734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.544922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.544985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.545213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.545273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.545505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.545564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.545784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.545843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.546095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.546154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.546385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.546446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.546643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.546700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.546953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.547018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.547275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.547339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.547623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.547686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.547972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.548046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.548300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.548366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.548629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.548692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.548940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.549001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.549242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.549301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.549526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.549584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 
00:24:26.791 [2024-07-12 11:28:52.549857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.549927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.791 [2024-07-12 11:28:52.550170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.791 [2024-07-12 11:28:52.550228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.791 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.550451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.550509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.550747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.550804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.551046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.551109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 
00:24:26.792 [2024-07-12 11:28:52.551350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.551414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.551699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.551762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.552032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.552097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.552355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.552419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 00:24:26.792 [2024-07-12 11:28:52.552707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.792 [2024-07-12 11:28:52.552770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.792 qpair failed and we were unable to recover it. 
00:24:26.793 [2024-07-12 11:28:52.585942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.586006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 
00:24:26.793 [2024-07-12 11:28:52.586217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.586283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 00:24:26.793 [2024-07-12 11:28:52.586534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.586596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 00:24:26.793 [2024-07-12 11:28:52.586853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.586935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 00:24:26.793 [2024-07-12 11:28:52.587136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.587199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 00:24:26.793 [2024-07-12 11:28:52.587490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.587552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 
00:24:26.793 [2024-07-12 11:28:52.587839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.587942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.793 qpair failed and we were unable to recover it. 00:24:26.793 [2024-07-12 11:28:52.588191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.793 [2024-07-12 11:28:52.588255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.588534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.588599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.588861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.588942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.589204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.589267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.589557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.589620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.589856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.589947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.590159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.590222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.590517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.590579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.590834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.590914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.591132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.591196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.591462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.591524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.591781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.591844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.592131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.592195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.592446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.592510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.592758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.592821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.593083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.593149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.593452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.593515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.593733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.593797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.594085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.594150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.594402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.594466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.594678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.594740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.594951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.595017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.595232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.595295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.595592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.595655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.595882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.595949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.596269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.596333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.596621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.596685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.596940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.597004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.597212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.597285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.597539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.597603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.597858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.597935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.598228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.598291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.598503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.598565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.598812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.598893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.599111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.599174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.599475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.599537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.599786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.599850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.600074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.600138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.600408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.600471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.600680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.600746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.600963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.601029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.601291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.601354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.601609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.601672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.601925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.601990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.602235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.602298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.602542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.602606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.602880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.602944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.603213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.603277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.603471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.603536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.603827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.603905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.604196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.604259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.604508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.604570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.604832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.604912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.605147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.605214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.605460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.605523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.605773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.605836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.606117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.606181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.606435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.606499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.606706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.606769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.607044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.607109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.607400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.607463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.607718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.607781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.608052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.608117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.608419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.608482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 
00:24:26.794 [2024-07-12 11:28:52.608725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.608788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.794 [2024-07-12 11:28:52.609078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.794 [2024-07-12 11:28:52.609143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.794 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.609400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.609465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.609765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.609828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.610058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.610132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.610389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.610453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.610678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.610741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.611030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.611096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.611355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.611417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.611667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.611731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.611975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.612043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.612262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.612325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.612517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.612581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.612833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.612928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.613179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.613242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.613427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.613491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.613736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.613798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.614058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.614124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.614404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.614467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.614700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.614763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.615023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.615089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.615298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.615362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.615657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.615721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.615931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.615998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.616216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.616282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.616529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.616594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.616841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.616920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.617175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.617238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.617529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.617593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.617895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.617960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.618200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.618263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.618564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.618637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.618857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.618933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.619166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.619230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.619499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.619562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.619800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.619862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.620145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.620208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.620470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.620534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.620751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.620814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.621135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.621200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.621419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.621484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.621792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.621857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.622131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.622197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.622452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.622530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.622770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.622845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.623136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.623211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.623497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.623564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.623810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.623891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.624185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.624247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.624450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.624514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.624754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.624818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.625052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.625115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.625404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.625468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.625688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.625753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.625967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.626032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.626248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.626311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.626611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.626673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.626939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.627004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.627305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.627368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.627654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.627717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.795 [2024-07-12 11:28:52.627977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.628042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.628267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.628331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.628626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.628688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.628953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.629018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 00:24:26.795 [2024-07-12 11:28:52.629258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.795 [2024-07-12 11:28:52.629322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.795 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.629551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.629617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.629891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.629956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.630168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.630231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.630477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.630539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.630768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.630832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.631112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.631175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.631417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.631492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.631775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.631839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.632168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.632232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.632444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.632508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.632733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.632796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.633063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.633126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.633323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.633391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.633642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.633703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.633922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.633986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.634198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.634261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.634507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.634568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.634773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.634835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.635104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.635168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.635385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.635447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.635709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.635772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.636038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.636104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.636355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.636419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.636669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.636732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.637034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.637098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.637355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.637418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.637643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.637705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.637955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.638019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.638270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.638333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.638576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.638640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.638928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.638993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.639240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.639303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.639544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.639607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.639907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.639972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.640197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.640260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.640497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.640560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.640753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.640816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.641126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.641190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.641406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.641469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.641672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.641735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.641982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.642047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.642278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.642341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.642649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.642712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.642917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.642981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 00:24:26.796 [2024-07-12 11:28:52.643231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.796 [2024-07-12 11:28:52.643295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.796 qpair failed and we were unable to recover it. 
00:24:26.796 [2024-07-12 11:28:52.643523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.643585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.643793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.643879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.644169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.644231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.644490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.644553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.644820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.644909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.645164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.645230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.645455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.645518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.645779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.645843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.646118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.646184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.646434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.646496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.646747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.646810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.647091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.647156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.647416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.647479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.647692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.796 [2024-07-12 11:28:52.647755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.796 qpair failed and we were unable to recover it.
00:24:26.796 [2024-07-12 11:28:52.648039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.648102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.648359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.648422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.648710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.648773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.649083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.649147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.649392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.649455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.649700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.649763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.649986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.650050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.650260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.650322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.650571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.650634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.650896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.650960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.651225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.651288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.651490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.651553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.651803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.651882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.652101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.652166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.652472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.652535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.652774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.652836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.653083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.653146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.653388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.653451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.653675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.653736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.654033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.654098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.654345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.654408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.654606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.654668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.654892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.654955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.655208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.655271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.655564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.655627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.655925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.655989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.656203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.656266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.656535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.656610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.656911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.656976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.657195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.657257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.657509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.657571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.657820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.657898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.658185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.658246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.658487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.658550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.658799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.658861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.659127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.659189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.659486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.659548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.659803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.659883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.660101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.660163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.660414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.660476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.660726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.660787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.661092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.661157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.661373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.661435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.661648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.661711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.661967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.662032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.662251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.662317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.662582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.662645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.662893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.662957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.663211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.663274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.663509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.663572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.663812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.663888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.664178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.664241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.664446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.664508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.664763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.664825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.665076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.665140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.665349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.665412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.665650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.665713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.665918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.665983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.666205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.666267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.666508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.666571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.666809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.666887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.667187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.667251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.667466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.667529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.667821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.667896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.668100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.797 [2024-07-12 11:28:52.668165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.797 qpair failed and we were unable to recover it.
00:24:26.797 [2024-07-12 11:28:52.668416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.668479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.668670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.668733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.668925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.668999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.669226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.669292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.669581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.669645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.669883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.669948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.670229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.670291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.670496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.670559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.670798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.670860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.671077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.671142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.671435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.671499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.671710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.671772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.671994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.672060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.672282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.672345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.672568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.672630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.672918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.798 [2024-07-12 11:28:52.672983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.798 qpair failed and we were unable to recover it.
00:24:26.798 [2024-07-12 11:28:52.673203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.673269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.673469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.673532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.673758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.673821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.674140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.674204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.674464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.674528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.674777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.674839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.675116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.675180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.675435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.675498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.675773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.675836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.676103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.676166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.676382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.676445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.676690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.676753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.676992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.677056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.677369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.677431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.677685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.677748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.677949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.678017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.678270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.678333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.678571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.678633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.678911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.678976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.679172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.679236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.679436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.679502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.679709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.679772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.680047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.680112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.680421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.680484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.680739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.680802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.681064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.681131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.681381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.681453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.681746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.681809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.682043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.682108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.682397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.682460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.682715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.682778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.683035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.683100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.683389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.683452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.683743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.683806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.684076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.684140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.684383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.684447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.684695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.684762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.685021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.685086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.685376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.685439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.685689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.685752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 
00:24:26.798 [2024-07-12 11:28:52.685990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.686055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.686253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.686316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.798 [2024-07-12 11:28:52.686574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.798 [2024-07-12 11:28:52.686639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.798 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.686917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.686982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.687202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.687268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.687460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.687524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.687771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.687834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.688063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.688127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.688416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.688481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.688691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.688754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.688959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.689026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.689271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.689335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.689632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.689695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.689967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.690032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.690271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.690333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.690612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.690675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.690914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.690979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.691221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.691283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.691510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.691573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.691815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.691896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.692103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.692169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.692457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.692520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.692770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.692832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.693084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.693148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.693369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.693435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.693690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.693753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.693967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.694042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.694255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.694318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.694530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.694592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.694831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.694909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.695163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.695231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.695494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.695556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.695811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.695889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.696129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.696192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.696437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.696503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.696788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.696851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.697084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.697147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.697413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.697476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.697722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.697785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.698031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.698096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.698328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.698391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.698677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.698740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.698988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.699052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.699247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.699310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 00:24:26.799 [2024-07-12 11:28:52.699561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.699628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it. 
00:24:26.799 [2024-07-12 11:28:52.699921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.799 [2024-07-12 11:28:52.699986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.799 qpair failed and we were unable to recover it.
[identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeat continuously through 2024-07-12 11:28:52.735; repeats omitted]
00:24:26.801 [2024-07-12 11:28:52.736191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.736254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.736508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.736570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.736818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.736896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.737120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.737182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.737430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.737495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.737791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.737854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.738134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.738197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.738484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.738547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.738786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.738852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.739072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.739135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.739422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.739485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.739740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.739803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.740050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.740114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.740339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.740403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.740694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.740756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.740966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.741031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.741290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.741354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.741562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.741624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.741903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.741967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.742218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.742282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.742471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.742532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.742758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.742821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.743087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.743153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.743403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.743467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.743710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.743773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.744009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.744073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.744327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.744399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.744613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.744677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.744933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.744999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.745251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.745315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.745600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.745663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.745946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.746011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.746242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.746305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.746522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.746585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.746882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.746946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.747198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.747262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.747457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.747519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.747809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.747886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.748101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.748164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.748442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.748504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.748734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.748799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.749074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.749138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.749387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.749453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.749741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.749805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.750084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.750149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 
00:24:26.801 [2024-07-12 11:28:52.750392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.750457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.750662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.801 [2024-07-12 11:28:52.750725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.801 qpair failed and we were unable to recover it. 00:24:26.801 [2024-07-12 11:28:52.750975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.751041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.751290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.751353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.751602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.751665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 
00:24:26.802 [2024-07-12 11:28:52.751958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.752023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.752320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.752383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.752623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.752688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.752903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.752970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.753219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.753282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 
00:24:26.802 [2024-07-12 11:28:52.753471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.753534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.753786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.753848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.754098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.754161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.754422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.754484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.754772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.754834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 
00:24:26.802 [2024-07-12 11:28:52.755145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.755209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.755457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.755520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.755797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.755859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.756183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.756246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.756489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.756552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 
00:24:26.802 [2024-07-12 11:28:52.756808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.756886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.757106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.757178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.757465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.757528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.757769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.757834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.758131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.758195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 
00:24:26.802 [2024-07-12 11:28:52.758485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.758548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.758795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.758858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.759093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.759156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.759345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.759408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 00:24:26.802 [2024-07-12 11:28:52.759646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.802 [2024-07-12 11:28:52.759708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.802 qpair failed and we were unable to recover it. 
00:24:26.802 [2024-07-12 11:28:52.759912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.802 [2024-07-12 11:28:52.759977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.802 qpair failed and we were unable to recover it.
00:24:26.803 [... the same three-line failure (posix.c:1038 connect() errno = 111 → nvme_tcp.c:2383 sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 11:28:52.760 through 11:28:52.796, always for tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 ...]
00:24:26.804 [2024-07-12 11:28:52.796523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.796586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.796831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.796908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.797113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.797176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.797425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.797489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.797748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.797811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.798056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.798119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.798369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.798432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.798678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.798739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.798977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.799042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.799276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.799343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.799544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.799608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.799850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.799928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.800224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.800287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.800500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.800564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.800774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.800838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.801115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.801178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.801439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.801502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.801789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.801851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.802104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.802166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.802385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.802447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.802699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.802762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.803076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.803140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.803388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.803452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.803715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.803787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.804012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.804075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.804372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.804435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.804623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.804687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.804922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.804989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.805276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.805340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.805607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.805669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.805892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.805956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.806243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.806306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.806613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.806676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.806927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.806991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.807256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.807319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.807532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.807594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.807887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.807951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.808253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.808316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.808515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.808578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.808827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.808906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.809210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.809274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.809540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.809602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.809848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.809928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.810217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.810280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.810567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.810630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.810897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.810962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.811178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.811241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.811482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.811545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.811792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.811855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.812109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.812172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.812395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.812458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.812662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.812725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.812971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.813037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.813328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.813391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.813610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.813676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 
00:24:26.804 [2024-07-12 11:28:52.813953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.814016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.804 qpair failed and we were unable to recover it. 00:24:26.804 [2024-07-12 11:28:52.814305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.804 [2024-07-12 11:28:52.814368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.814581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.814643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.814845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.814922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.815130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.815193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.815476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.815539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.815783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.815845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.816073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.816135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.816336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.816408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.816668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.816731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.816992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.817057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.817314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.817378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.817585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.817648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.817896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.817960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.818224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.818287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.818502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.818566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.818770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.818835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.819098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.819162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.819411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.819475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.819766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.819829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.820062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.820125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.820363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.820427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.820697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.820761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.821005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.821070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.821293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.821355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.821558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.821621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.821896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.821961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.822169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.822235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.822454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.822533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.822847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.822941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.823194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.823267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.823539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.823603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.823850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.823944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.824234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.824299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.824554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.824616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.824886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.824951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.825247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.825310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.825544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.825606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.825852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.825942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.826140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.826203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.826443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.826504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.826722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.826784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.827059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.827123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.827432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.827494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.827753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.827815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.828084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.828147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.828348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.828410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.828702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.828764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.829007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.829080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.829303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.829366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.829611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.829673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.829918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.829982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.830199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.830261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.830552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.830614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.830840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.830918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.831125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.831188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.831433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.831494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.831777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.831839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.832105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.832168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.832419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.832483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.832780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.832842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.833074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.833137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.833391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.833454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.833669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.833735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.834029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.834095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.834338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.834403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.834606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.834670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.834890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.834954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.835241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.835303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.835550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.835613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.835816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.835893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.836161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.836223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.836481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.836543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.836750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.836813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.837116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.837179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 
00:24:26.805 [2024-07-12 11:28:52.837391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.837455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.837746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.805 [2024-07-12 11:28:52.837809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.805 qpair failed and we were unable to recover it. 00:24:26.805 [2024-07-12 11:28:52.838040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.838105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.838306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.838369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.838573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.838635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.838845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.838929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.839149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.839213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.839497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.839560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.839799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.839862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.840163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.840227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.840491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.840554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.840796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.840859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.841176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.841238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.841442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.841515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.841710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.841776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.842050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.842115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.842363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.842426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.842691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.842753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.843007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.843074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.843405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.843469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.843722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.843784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.844034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.844098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.844287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.844350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.844598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.844662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.844892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.844956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.845242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.845305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.845566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.845628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.845904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.845968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.846204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.846268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.846555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.846617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.846801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.846864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.847135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.847197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.847448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.847512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.847739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.847803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.848122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.848187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.848402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.848466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.848686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.848748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.849000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.849064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.849359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.849424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.849675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.849738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.849957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.850021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.850254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.850317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.850572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.850636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.850897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.850964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.851176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.851238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.851485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.851548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.851836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.851913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.852130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.852195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.852445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.852509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 00:24:26.806 [2024-07-12 11:28:52.852799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.806 [2024-07-12 11:28:52.852863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:26.806 qpair failed and we were unable to recover it. 
00:24:26.806 [2024-07-12 11:28:52.853115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.853180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.853386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.853450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.853650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.853713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.853970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.854046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.854289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.854352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.854574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.854636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.854841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.854917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.855104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.855168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.855451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.855514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.855800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.855863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.856111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.856182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.856424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.856486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.856721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.856784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.857003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.857067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.857284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.857351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.857639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.857702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.857982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.858046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.858259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.858322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.858608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.858670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.806 qpair failed and we were unable to recover it.
00:24:26.806 [2024-07-12 11:28:52.858926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.806 [2024-07-12 11:28:52.858990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.859278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.859341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.859595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.859657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.859914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.859982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.860233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.860299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.860595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.860659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.860903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.860968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.861213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.861275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.861483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.861548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.861807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.861883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.862096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.862158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.862462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.862525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.862784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.862848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.863144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.863208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.863429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.863492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.863699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.863761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.864073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.864138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.864431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.864493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.864777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.864839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.865093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.865165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.865420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.865484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.865736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.865797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.866049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.866114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.866380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.866444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.866684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.866756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.867068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.867161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.867520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.867609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.867932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.868004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.868274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.868346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.868601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.868668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.868919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.868986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.869239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.869303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.869524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.869590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.869838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.869941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.870200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.870293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.870592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.870663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.870937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.871003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.871255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.871318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.871645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.871709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.872005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.872070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.872356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.872445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.872729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:26.807 [2024-07-12 11:28:52.872820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:26.807 qpair failed and we were unable to recover it.
00:24:26.807 [2024-07-12 11:28:52.873151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.873216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.873479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.873545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.873805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.873889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.874119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.874183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.874463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.874527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.874749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.874812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.875073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.875137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.875344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.875411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.875623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.875686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.875990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.876055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.876311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.876374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.876625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.876688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.876928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.876992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.877230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.877293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.877539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.877604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.877877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.877953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.878202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.878265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.878482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.878545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.878764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.878831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.879071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.879134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.879357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.879420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.879635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.879698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.879935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.880010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.880251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.880313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.880522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.880585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.880791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.880853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.881122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.881185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.881425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.881488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.881750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.881812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.882049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.882113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.882361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.882423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.882626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.882697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.882905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.882972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.883219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.096 [2024-07-12 11:28:52.883282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.096 qpair failed and we were unable to recover it.
00:24:27.096 [2024-07-12 11:28:52.883528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.096 [2024-07-12 11:28:52.883591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.883811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.883904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.884163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.884229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.884470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.884533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.884813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.884891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.885140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.885203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.885489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.885551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.885776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.885840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.886059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.886124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.886377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.886440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.886719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.886781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.886997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.887062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.887316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.887378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.887670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.887733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.887990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.888055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.888314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.888378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.888642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.888705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.888953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.889019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.889303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.889367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.889609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.889672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.889954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.890018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.890273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.890337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.890521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.890584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.890775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.890838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.891074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.891138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.891399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.891462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.891691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.891754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.892018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.892083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.892306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.892380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.892598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.892661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.892916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.892982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.893210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.893274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.893491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.893555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.893801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.893881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.894107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.894171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.894467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.894530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.894753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.894819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.895080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.895144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.895394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.895459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.895670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.895732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.896025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.896090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.896370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.896432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.896728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.896791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.897055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.897119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.897375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.897438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.897647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.897710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.897957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.898023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.898255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.898317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.898572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.898635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.898894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.898958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.899157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.899219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.899485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.899548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.899796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.899858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.900098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.900165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.900423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.900487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.900798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.900861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.901088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.901152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.901397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.901461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.901747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.901809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.902137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.902202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.902462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.902526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.902731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.902794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.903049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.903114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.903357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.903419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.903710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.903773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.903983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.904048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.904248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.904311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.904497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.904560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.904848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.904937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.905231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.905293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.905550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.905613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.905856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.905935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.906200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.906263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.906484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.906547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.906838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.906918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 
00:24:27.097 [2024-07-12 11:28:52.907162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.907225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.907469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.907532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.097 [2024-07-12 11:28:52.907785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.097 [2024-07-12 11:28:52.907847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.097 qpair failed and we were unable to recover it. 00:24:27.098 [2024-07-12 11:28:52.908103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.908167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 00:24:27.098 [2024-07-12 11:28:52.908365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.908429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 
00:24:27.098 [2024-07-12 11:28:52.908692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.908755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 00:24:27.098 [2024-07-12 11:28:52.908998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.909064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 00:24:27.098 [2024-07-12 11:28:52.909288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.909351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 00:24:27.098 [2024-07-12 11:28:52.909606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.909669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 00:24:27.098 [2024-07-12 11:28:52.909893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.098 [2024-07-12 11:28:52.909958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.098 qpair failed and we were unable to recover it. 
00:24:27.098 [2024-07-12 11:28:52.910168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.098 [2024-07-12 11:28:52.910230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.098 qpair failed and we were unable to recover it.
00:24:27.099 [2024-07-12 11:28:52.947513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.947576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.947817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.947918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.948201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.948267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.948529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.948591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.948899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.948963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 
00:24:27.099 [2024-07-12 11:28:52.949221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.949284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.949509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.949572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.949812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.949894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.950165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.950228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.950472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.950538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 
00:24:27.099 [2024-07-12 11:28:52.950819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.950900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.951155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.951217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.951499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.951561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.951803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.951902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.952203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.952266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 
00:24:27.099 [2024-07-12 11:28:52.952521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.952598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.952864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.952947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.953191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.953255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.953555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.953617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.953917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.953982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 
00:24:27.099 [2024-07-12 11:28:52.954235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.954298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.954554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.954616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.954918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.099 [2024-07-12 11:28:52.954983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.099 qpair failed and we were unable to recover it. 00:24:27.099 [2024-07-12 11:28:52.955239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.955301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.955552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.955617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.955912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.955977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.956223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.956285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.956531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.956594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.956907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.956972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.957273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.957336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.957531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.957594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.957807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.957888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.958127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.958190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.958453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.958514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.958764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.958826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.959113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.959177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.959426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.959488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.959721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.959784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.960064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.960128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.960423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.960485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.960729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.960790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.961094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.961159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.961472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.961534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.961774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.961836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.962149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.962212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.962495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.962558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.962817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.962895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.963100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.963164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.963396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.963458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.963687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.963750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.964036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.964101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.964409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.964472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.964764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.964826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.965138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.965201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.965449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.965511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.965758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.965830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.966105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.966168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.966418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.966480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.966782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.966845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.967117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.967180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.967460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.967522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.967777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.967839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.968152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.968215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.968507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.968569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.968884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.968948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.969194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.969256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.969498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.969560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.969775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.969840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.970108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.970170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.970471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.970533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.970793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.970856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.971127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.971193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.971477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.971539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.971802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.971895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.972192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.972256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.100 [2024-07-12 11:28:52.972474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.972536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.972779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.972842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.973116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.973179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.973484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.973547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 00:24:27.100 [2024-07-12 11:28:52.973762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.100 [2024-07-12 11:28:52.973824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.100 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.007773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.007836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.008141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.008203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.008467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.008529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.008830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.008907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.009179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.009241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.009495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.009557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.009852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.009933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.010146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.010209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.010495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.010557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.010845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.010926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.011174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.011237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.011486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.011551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.011811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.011901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.012209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.012272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.012561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.012623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.012918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.012985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.013189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.013250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.013505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.013568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.013888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.013953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.014202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.014266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.014573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.014638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.014845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.014925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.015152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.015214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.015511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.015574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.015910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.015975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.016262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.016326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.016587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.016648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.016906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.016972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.017221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.017287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.017536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.017598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.017890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.017970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.018272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.018336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.018641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.018703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.019001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.019065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.019287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.019350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.019603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.019668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.019914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.019981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.020230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.020294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.020550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.020613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.020882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.020947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.021214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.021276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.021567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.021630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.021888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.021953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.022240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.022302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.022613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.022688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.022959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.023040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.023278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.023357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.023656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.023720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.024027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.024093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.024342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.024405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.024693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.024757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.025017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.025083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.025371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.025433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.025648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.025711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.025993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.026057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.026307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.026373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.026586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.026650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.026921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.026986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.027276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.027339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.027584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.027646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.027900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.027967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.028270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.028333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.028589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.028653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.028962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.029027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 00:24:27.102 [2024-07-12 11:28:53.029289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.102 [2024-07-12 11:28:53.029352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.102 qpair failed and we were unable to recover it. 
00:24:27.102 [2024-07-12 11:28:53.029635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.029697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.030011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.030075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.030375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.030438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.030739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.030802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.031094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.031159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 
00:24:27.103 [2024-07-12 11:28:53.031416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.031488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.031703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.031765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.032092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.032157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.032411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.032475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.032682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.032745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 
00:24:27.103 [2024-07-12 11:28:53.033031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.033097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.033314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.033377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.033636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.033702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.033914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.033979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 00:24:27.103 [2024-07-12 11:28:53.034267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.103 [2024-07-12 11:28:53.034330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.103 qpair failed and we were unable to recover it. 
00:24:27.103 [2024-07-12 11:28:53.034614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.103 [2024-07-12 11:28:53.034676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.103 qpair failed and we were unable to recover it.
00:24:27.104 [last three messages repeated verbatim, timestamps 2024-07-12 11:28:53.034978 through 2024-07-12 11:28:53.072100]
00:24:27.104 [2024-07-12 11:28:53.072340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.072403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.072650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.072712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.072931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.072996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.073238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.073301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.073544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.073607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 
00:24:27.104 [2024-07-12 11:28:53.073901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.073966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.074195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.074258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.074523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.074585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.074844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.074935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.075159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.075222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 
00:24:27.104 [2024-07-12 11:28:53.075506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.075569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.075821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.075901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.076149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.076212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.076505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.076568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.076784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.076845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 
00:24:27.104 [2024-07-12 11:28:53.077124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.077188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.077412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.077475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.077716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.077779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.078020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.078085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 00:24:27.104 [2024-07-12 11:28:53.078282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.078347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.104 qpair failed and we were unable to recover it. 
00:24:27.104 [2024-07-12 11:28:53.078565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.104 [2024-07-12 11:28:53.078630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.078884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.078950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.079179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.079241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.079495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.079558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.079802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.079897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.080123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.080185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.080408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.080471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.080712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.080777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.081009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.081076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.081337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.081399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.081640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.081705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.081989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.082054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.082343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.082406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.082614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.082678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.082943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.083007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.083297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.083376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.083627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.083690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.083911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.083978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.084195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.084258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.084472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.084535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.084824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.084903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.085103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.085167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.085462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.085524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.085776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.085839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.086125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.086188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.086408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.086473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.086709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.086772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.087042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.087107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.087303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.087365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.087615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.087678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.087968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.088034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.088286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.088350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.088606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.088669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.088918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.088983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.089197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.089259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.089509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.089572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.089862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.089939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.090234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.090296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.090505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.090568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.090805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.090881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.091087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.091151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.091375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.091438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.091667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.091732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.091930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.091996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.092287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.092350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.092551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.092615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.092878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.092943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.093147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.093212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.093492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.093554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.093765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.093827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.094130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.094194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.094496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.094558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.094809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.094885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.095084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.095146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 00:24:27.105 [2024-07-12 11:28:53.095362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.105 [2024-07-12 11:28:53.095425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.105 qpair failed and we were unable to recover it. 
00:24:27.105 [2024-07-12 11:28:53.095662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.095735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.096029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.096094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.096389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.096452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.096743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.096805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.097070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.097134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.097337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.097399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.097658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.097721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.097982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.098046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.098295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.098356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.098604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.098666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.098922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.098987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.099279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.099342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.099551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.099617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.099906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.099973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.100266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.100327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.105 qpair failed and we were unable to recover it.
00:24:27.105 [2024-07-12 11:28:53.100576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.105 [2024-07-12 11:28:53.100639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.100914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.100978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.101269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.101331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.101624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.101686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.101934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.101998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.102260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.102323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.102545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.102607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.102858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.102935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.103186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.103251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.103458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.103522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.103733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.103795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.104085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.104151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.104458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.104519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.104742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.104805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.105074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.105139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.105330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.105392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.105639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.105702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.105946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.106012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.106236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.106298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.106539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.106603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.106847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.106926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.107151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.107215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.107469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.107533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.107705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.107768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.108021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.108089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.108308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.108373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.108603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.108665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.108921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.108988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.109238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.109301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.109524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.109586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.109843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.109926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.110141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.110207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.110500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.110564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.110792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.110857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.111131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.111193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.111419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.111482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.111725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.111789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.112078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.112143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.112437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.112501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.112715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.112780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.113039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.113104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.113393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.113456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.113663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.113726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.113977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.114042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.114287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.114349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.114614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.114677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.114926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.114989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.115180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.115242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.115484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.115548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.115797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.115863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.116171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.116234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.116441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.116506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.116732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.116805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.117078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.117142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.117444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.117507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.117710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.117773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.118009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.118074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.118334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.118397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.118662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.118724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.118945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.119013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.119317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.119381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.119582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.119644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.119904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.119969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.120223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.120286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.120538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.120600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.120859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.120940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.121154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.121217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.121468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.121531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.121836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.121914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.122175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.122237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.122454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.122518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.122726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.122788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.123015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.123079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.123325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.123388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.123630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.123691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.123937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.124003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.124199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.106 [2024-07-12 11:28:53.124262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.106 qpair failed and we were unable to recover it.
00:24:27.106 [2024-07-12 11:28:53.124492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.124554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.124800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.124862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.125156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.125219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.125432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.125494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.125712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.125774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.126054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.126119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.126325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.126389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.126679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.126742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.126957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.127022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.127239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.127302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.127538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.127600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.127843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.127924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.128139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.128202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.128423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.128485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.128732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.128797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.129088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.129162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.129415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.129480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.129736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.129798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.130062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.130127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.130324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.130386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.130596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.130660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.130916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.130981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.131220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.107 [2024-07-12 11:28:53.131282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.107 qpair failed and we were unable to recover it.
00:24:27.107 [2024-07-12 11:28:53.131487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.131551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.131766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.131829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.132123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.132190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.132396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.132458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.132692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.132754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.132991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.133056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.133311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.133373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.133606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.133669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.133953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.134016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.134261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.134324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.134549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.134612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.134822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.134902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.135153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.135216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.135500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.135563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.135771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.135833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.136055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.136119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.136420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.136483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.136755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.136817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.137097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.137161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.137386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.137449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.137708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.137770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.138046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.138113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.138373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.138436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.138682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.138748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.138969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.139035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.139251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.139314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.139527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.139591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.139800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.139862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.140132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.140194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.140447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.140511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.140731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.140793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.141060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.141124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.141345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.141418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.141618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.141680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.141903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.141970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.142214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.142277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.142563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.142626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.142894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.142959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.143259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.143321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.143533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.143596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.143827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.143907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.144139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.144201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.144438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.144501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.144750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.144813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.145040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.145104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 
00:24:27.107 [2024-07-12 11:28:53.145322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.145385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.145646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.145708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.107 [2024-07-12 11:28:53.145932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.107 [2024-07-12 11:28:53.146000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.107 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.146219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.146285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.146517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.146579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 
00:24:27.108 [2024-07-12 11:28:53.146809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.146886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.147098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.147161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.147364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.147428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.147690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.147753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.148010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.148072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 
00:24:27.108 [2024-07-12 11:28:53.148326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.148386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.148628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.148690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.148979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.149044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.149334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.149397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.149624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.149686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 
00:24:27.108 [2024-07-12 11:28:53.149945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.150011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.150238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.150301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.150591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.150654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.150911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.150975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.151266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.151330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 
00:24:27.108 [2024-07-12 11:28:53.151562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.151625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.151891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.151954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.152201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.152264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.152457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.152523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.152747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.152809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 
00:24:27.108 [2024-07-12 11:28:53.153033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.153097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.153354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.153417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.153708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.153780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.154053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.154118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 00:24:27.108 [2024-07-12 11:28:53.154427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.108 [2024-07-12 11:28:53.154489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.108 qpair failed and we were unable to recover it. 
00:24:27.108 [2024-07-12 11:28:53.154693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.108 [2024-07-12 11:28:53.154756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.108 qpair failed and we were unable to recover it.
[... the identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every retry from 11:28:53.155003 through 11:28:53.190966 ...]
00:24:27.109 [2024-07-12 11:28:53.191205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.191269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.191532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.191594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.191895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.191970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.192276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.192338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.192539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.192602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 
00:24:27.109 [2024-07-12 11:28:53.192856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.192933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.193176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.193241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.193462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.193526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.193800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.193862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 00:24:27.109 [2024-07-12 11:28:53.194171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.109 [2024-07-12 11:28:53.194233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.109 qpair failed and we were unable to recover it. 
00:24:27.109 [2024-07-12 11:28:53.194447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.194510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.194752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.194816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.195096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.195161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.195360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.195426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.195620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.195683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.195951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.196043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.196416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.196487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.196742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.196806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.197097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.197161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.197425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.197488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.197771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.197833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.198109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.198174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.198448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.198511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.198771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.198834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.199143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.199206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.199462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.199525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.199769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.199831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.200107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.200198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.200514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.200584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.200902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.200969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.201213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.201276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.201567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.201630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.201914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.201978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.202181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.202245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.202529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.202590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.202896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.202961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.203221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.203283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.203538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.203600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.203846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.203924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.204133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.204195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.204448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.204510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.204764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.204826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.205038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.205112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.205403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.205467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.110 [2024-07-12 11:28:53.205760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.205850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 
00:24:27.110 [2024-07-12 11:28:53.206231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.110 [2024-07-12 11:28:53.206304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.110 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.206603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.206668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.206922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.206988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.207241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.207306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.207575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.207638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 
00:24:27.386 [2024-07-12 11:28:53.207927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.207992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.208286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.208349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.208641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.208704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.208955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.209021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.209243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.209308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 
00:24:27.386 [2024-07-12 11:28:53.209525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.386 [2024-07-12 11:28:53.209588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.386 qpair failed and we were unable to recover it. 00:24:27.386 [2024-07-12 11:28:53.209914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.209980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.210271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.210334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.210542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.210605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.210856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.210939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-12 11:28:53.211239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.211301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.211585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.211648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.211897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.211962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.212214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.212281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.212574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.212640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-12 11:28:53.212898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.212965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.213210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.213275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.213533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.213595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.213791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.213853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.214195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.214258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-12 11:28:53.214514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.214576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.214891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.214954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.215219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.215282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.215542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.215606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.215843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.215925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.387 [2024-07-12 11:28:53.216189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.216252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.216503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.216568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.216822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.216915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.217215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.217278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 00:24:27.387 [2024-07-12 11:28:53.217524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.387 [2024-07-12 11:28:53.217587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.387 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-12 11:28:53.253128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.253191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.253478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.253541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.253799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.253864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.254136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.254199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.254424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.254489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-12 11:28:53.254778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.254842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.255126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.255189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.255474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.255537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.255803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.255884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.256113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.256176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-12 11:28:53.256439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.256503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.256746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.256808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.257081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.257155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.257404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.257467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.257674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.257736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-12 11:28:53.257972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.258038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.258258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.258321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.258603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.258665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.258927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.258991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.259286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.259348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 
00:24:27.390 [2024-07-12 11:28:53.259591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.390 [2024-07-12 11:28:53.259653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.390 qpair failed and we were unable to recover it. 00:24:27.390 [2024-07-12 11:28:53.259908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.259973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.260230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.260291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.260495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.260558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.260811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.260888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 
00:24:27.391 [2024-07-12 11:28:53.261159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.261221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.261522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.261584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.261803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.261895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.262108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.262171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.262441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.262503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 
00:24:27.391 [2024-07-12 11:28:53.262748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.262810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.263036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.263101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.263388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.263450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.263662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.263724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.263939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.263973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 
00:24:27.391 [2024-07-12 11:28:53.264075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.264109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.264229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.264263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.264392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.264425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.264531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.264564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.264676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.264711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 
00:24:27.391 [2024-07-12 11:28:53.264826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.264860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.265011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.265044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.265155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.265234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.265461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.265523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.265712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.265746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 
00:24:27.391 [2024-07-12 11:28:53.265896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.265930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.266075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.266138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.266326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.266388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.266611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.266673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.266901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.266934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 
00:24:27.391 [2024-07-12 11:28:53.267088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.267151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.267441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.267504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.267740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.267803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.268021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.268055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.391 qpair failed and we were unable to recover it. 00:24:27.391 [2024-07-12 11:28:53.268200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.391 [2024-07-12 11:28:53.268262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 
00:24:27.392 [2024-07-12 11:28:53.268460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.268523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.268723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.268785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.268978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.269012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.269163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.269226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.269528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.269590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 
00:24:27.392 [2024-07-12 11:28:53.269844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.269886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.270002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.270035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.270218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.270280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.270521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.270584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.270814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.270847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 
00:24:27.392 [2024-07-12 11:28:53.270995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.271029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.271220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.271283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.271522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.271585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.271789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.271823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.271978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.272012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 
00:24:27.392 [2024-07-12 11:28:53.272138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.272170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.272299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.272362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.272608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.272670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.272893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.272927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 00:24:27.392 [2024-07-12 11:28:53.273076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.392 [2024-07-12 11:28:53.273109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.392 qpair failed and we were unable to recover it. 
00:24:27.392 [2024-07-12 11:28:53.273317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.392 [2024-07-12 11:28:53.273379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.392 qpair failed and we were unable to recover it.
00:24:27.395 [last 3 messages repeated 114 more times between 11:28:53.273 and 11:28:53.305, all for tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420]
00:24:27.395 [2024-07-12 11:28:53.306138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.306204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.306502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.306564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.306777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.306842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.307125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.307188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.307481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.307542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 
00:24:27.395 [2024-07-12 11:28:53.307826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.307924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.308147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.308213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.308504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.308563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.308801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.308863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.309139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.309210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 
00:24:27.395 [2024-07-12 11:28:53.309500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.309561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.309858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.309938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.310231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.310295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.310604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.310668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.310954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.311021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 
00:24:27.395 [2024-07-12 11:28:53.311277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.311341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.311629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.311693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.311965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.312030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.312262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.312325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.312574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.312640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 
00:24:27.395 [2024-07-12 11:28:53.312852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.395 [2024-07-12 11:28:53.312931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.395 qpair failed and we were unable to recover it. 00:24:27.395 [2024-07-12 11:28:53.313144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.313207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.313503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.313567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.313830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.313909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.314176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.314240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.314489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.314556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.314823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.314908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.315200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.315264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.315517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.315580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.315831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.315946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.316164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.316230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.316487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.316551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.316837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.316927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.317184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.317248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.317493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.317558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.317793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.317857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.318142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.318207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.318446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.318512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.318779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.318843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.319127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.319191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.319482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.319545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.319794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.319860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.320114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.320179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.320396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.320462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.320720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.320784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.321022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.321090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.321349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.321414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.321671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.321733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.321964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.322030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.322294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.322368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.322623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.322687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.322915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.322981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.323236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.323303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.323509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.323574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.323793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.323861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.324132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.324198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.324462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.324526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.324745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.324810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.325052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.325119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.325412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.325476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 
00:24:27.396 [2024-07-12 11:28:53.325711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.325774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.396 [2024-07-12 11:28:53.326009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.396 [2024-07-12 11:28:53.326076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.396 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.326271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.326336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.326643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.326706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.327009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.327076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 
00:24:27.397 [2024-07-12 11:28:53.327331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.327398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.327646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.327710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.327924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.327990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.328205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.328269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.328572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.328636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 
00:24:27.397 [2024-07-12 11:28:53.328902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.328968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.329228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.329292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.329559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.329623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.329808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.329904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 00:24:27.397 [2024-07-12 11:28:53.330156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.397 [2024-07-12 11:28:53.330220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.397 qpair failed and we were unable to recover it. 
00:24:27.397 [2024-07-12 11:28:53.330507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.397 [2024-07-12 11:28:53.330571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.397 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7fa0e8000b90 (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously through 2024-07-12 11:28:53.367893 ...]
00:24:27.400 [2024-07-12 11:28:53.368158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.368224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.368518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.368584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.368835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.368939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.369237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.369301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.369610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.369674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 
00:24:27.400 [2024-07-12 11:28:53.369932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.370001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.370316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.370380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.370637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.370702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.370916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.370984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.371168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.371231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 
00:24:27.400 [2024-07-12 11:28:53.371448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.371515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.371815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.371892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.372196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.372261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.372562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.372626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.372916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.372983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 
00:24:27.400 [2024-07-12 11:28:53.373273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.373337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.373625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.373689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.373986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.374054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.374318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.374382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.374585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.374662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 
00:24:27.400 [2024-07-12 11:28:53.374960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.375026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.375327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.375392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.375683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.375747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.376035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.376100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.376293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.376357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 
00:24:27.400 [2024-07-12 11:28:53.376571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.376637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.376896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.376964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.400 [2024-07-12 11:28:53.377255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.400 [2024-07-12 11:28:53.377319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.400 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.377622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.377685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.377958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.378025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.378314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.378379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.378636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.378699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.378944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.379012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.379281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.379345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.379596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.379660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.379912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.379978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.380276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.380341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.380638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.380702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.380959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.381025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.381278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.381345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.381570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.381634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.381929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.381995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.382259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.382324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.382612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.382677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.382925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.382992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.383214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.383278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.383541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.383605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.383903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.383969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.384192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.384256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.384507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.384571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.384834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.384924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.385219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.385283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.385542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.385606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.385855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.385942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.386232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.386298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.386606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.386671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.386975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.387042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.387336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.387400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.387601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.387668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.387922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.387999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.388208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.388275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.388559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.388624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.388893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.388960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.389264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.389329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.389583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.389648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 
00:24:27.401 [2024-07-12 11:28:53.389948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.390014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.390317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.390382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.390679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.401 [2024-07-12 11:28:53.390743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.401 qpair failed and we were unable to recover it. 00:24:27.401 [2024-07-12 11:28:53.391050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.391117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.391412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.391476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 
00:24:27.402 [2024-07-12 11:28:53.391769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.391834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.392097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.392164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.392413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.392478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.392740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.392806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.393139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.393206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 
00:24:27.402 [2024-07-12 11:28:53.393467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.393531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.393772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.393835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.394109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.394175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.394419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.394483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 00:24:27.402 [2024-07-12 11:28:53.394772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.402 [2024-07-12 11:28:53.394837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.402 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.432496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.432560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.432851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.432941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.433235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.433299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.433608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.433672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.433923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.433990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.434287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.434351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.434563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.434626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.434915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.434982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.435278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.435342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.435647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.435710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.436004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.436070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.436381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.436445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.436694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.436757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.437066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.437132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.437431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.437498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.437793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.437857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.438131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.438206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.438516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.438580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.438834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.438916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.439224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.439289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.439586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.439649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.439923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.439990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.440278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.440342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.440638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.440702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.440990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.441056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.441314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.441378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.441677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.441740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.442000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.442066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.442331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.442395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.442682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.442746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 
00:24:27.405 [2024-07-12 11:28:53.443061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.443127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.443395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.443459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.443709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.443773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.444053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.405 [2024-07-12 11:28:53.444119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.405 qpair failed and we were unable to recover it. 00:24:27.405 [2024-07-12 11:28:53.444413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.444478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.444766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.444829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.445121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.445185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.445432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.445499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.445752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.445817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.446142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.446206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.446461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.446525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.446782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.446846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.447164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.447228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.447488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.447552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.447814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.447895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.448151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.448219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.448523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.448587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.448921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.448989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.449287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.449352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.449609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.449673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.449969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.450036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.450293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.450356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.450609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.450673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.450923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.450992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.451286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.451352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.451650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.451714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.452002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.452078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.452384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.452448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.452674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.452738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.453027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.453094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.453398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.453462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.453747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.453812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.454088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.454152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.454447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.454511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.454773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.454846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.455161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.455227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.455531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.455596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.455811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.455888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.456150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.456214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.456468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.456532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 
00:24:27.406 [2024-07-12 11:28:53.456787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.456855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.457148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.457212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.457461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.457526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.457826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.406 [2024-07-12 11:28:53.457910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.406 qpair failed and we were unable to recover it. 00:24:27.406 [2024-07-12 11:28:53.458214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.458278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.458481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.458545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.458754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.458819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.459097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.459162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.459423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.459487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.459714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.459778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.460049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.460116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.460420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.460484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.460739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.460807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.461102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.461169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.461459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.461523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.461800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.461883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.462138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.462206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.462501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.462565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.462862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.462945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.463243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.463307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.463598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.463663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.463949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.464015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.464324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.464388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.464677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.464740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.465038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.465103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.465390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.465455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.465742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.465821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.466098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.466162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.466465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.466529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.466822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.466904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.467158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.467223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.467515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.467579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.467773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.467839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.468123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.468188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.468491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.468556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.468845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.468945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.469249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.469313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.469567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.469634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.469902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.469970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.470275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.470340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.470652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.470716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.471008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.471074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.471331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.471396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.471692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.471755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 00:24:27.407 [2024-07-12 11:28:53.471982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.407 [2024-07-12 11:28:53.472048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.407 qpair failed and we were unable to recover it. 
00:24:27.407 [2024-07-12 11:28:53.472294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.472356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.472621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.472684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.472981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.473047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.473348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.473411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.473671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.473734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.474029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.474096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.474394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.474457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.474754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.474818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.475099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.475165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.475410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.475478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.475716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.475790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.476076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.476142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.476398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.476463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.476732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.476797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.477089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.477155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.477454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.477517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.477769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.477833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.478103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.478177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.478436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.478500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.478792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.478857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.479167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.479230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.479523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.479597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.479853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.479937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.480201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.480265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.480476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.480541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.480779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.480845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.481179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.481244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.481540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.481604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.481804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.481891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.482160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.482225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.482524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.482588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.482897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.482963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.483232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.483295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.483550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.483615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 00:24:27.408 [2024-07-12 11:28:53.483926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.483992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.408 qpair failed and we were unable to recover it. 
00:24:27.408 [2024-07-12 11:28:53.484257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.408 [2024-07-12 11:28:53.484320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.484529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.484595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.484892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.484961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.485205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.485268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.485490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.485554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-12 11:28:53.485843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.485935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.486248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.486312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.486604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.486667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.486980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.487045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.487257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.487320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-12 11:28:53.487579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.487642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.487923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.487989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.488288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.488351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.488654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.488718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.489018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.489083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-12 11:28:53.489390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.489454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.489752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.489816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.490076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.490150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.490447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.490510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.490804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.490883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.409 [2024-07-12 11:28:53.491134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.491198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.491490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.491552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.491840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.491927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.492194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.492257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 00:24:27.409 [2024-07-12 11:28:53.492555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.409 [2024-07-12 11:28:53.492617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.409 qpair failed and we were unable to recover it. 
00:24:27.694 [2024-07-12 11:28:53.521367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.694 [2024-07-12 11:28:53.521467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.694 qpair failed and we were unable to recover it.
00:24:27.694 [2024-07-12 11:28:53.521795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.694 [2024-07-12 11:28:53.521884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.694 qpair failed and we were unable to recover it.
00:24:27.694 [2024-07-12 11:28:53.522145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.694 [2024-07-12 11:28:53.522213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.694 qpair failed and we were unable to recover it.
00:24:27.694 [2024-07-12 11:28:53.522519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.694 [2024-07-12 11:28:53.522585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.694 qpair failed and we were unable to recover it.
00:24:27.694 [2024-07-12 11:28:53.522809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.694 [2024-07-12 11:28:53.522889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.694 qpair failed and we were unable to recover it.
00:24:27.695 [2024-07-12 11:28:53.539285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.539350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.539567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.539634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.539837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.539938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.540240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.540306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.540574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.540639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 
00:24:27.695 [2024-07-12 11:28:53.540932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.541000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.541252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.541317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.541573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.541639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.541897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.541963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.542259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.542324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 
00:24:27.695 [2024-07-12 11:28:53.542588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.542654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.542951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.543028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.543275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.543343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.543648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.543713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.544013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.544080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 
00:24:27.695 [2024-07-12 11:28:53.544328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.544394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.544640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.544707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.544955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.545025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.545302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.545369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.545666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.545731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 
00:24:27.695 [2024-07-12 11:28:53.545979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.546045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.695 qpair failed and we were unable to recover it. 00:24:27.695 [2024-07-12 11:28:53.546294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.695 [2024-07-12 11:28:53.546361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.546582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.546647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.546835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.546917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.547173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.547239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.547463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.547529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.547823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.547913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.548058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.548095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.548275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.548340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.548590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.548664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.548917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.548955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.549068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.549102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.549367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.549432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.549681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.549745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.549986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.550024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.550182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.550218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.550475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.550540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.550836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.550929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.551093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.551130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.551416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.551480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.551682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.551750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.551960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.551997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.552175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.552210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.552316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.552351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.552509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.552544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.552797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.552862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.553065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.553103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.553372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.553438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.553747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.553812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.554064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.554102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.554252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.554314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.554581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.554659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.554946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.554984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.555127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.555187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.555450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.555515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.555753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.555819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.556066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.556104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.556386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.556452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.556686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.556753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.556951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.556990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 
00:24:27.696 [2024-07-12 11:28:53.557165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.557230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.557415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.696 [2024-07-12 11:28:53.557482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.696 qpair failed and we were unable to recover it. 00:24:27.696 [2024-07-12 11:28:53.557741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.557807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.558030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.558105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.558400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.558466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 
00:24:27.697 [2024-07-12 11:28:53.558700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.558737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.558911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.558949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.559184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.559250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.559464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.559531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.559825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.559903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 
00:24:27.697 [2024-07-12 11:28:53.560159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.560226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.560480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.560546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.560799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.560883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.561096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.561165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.561394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.561460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 
00:24:27.697 [2024-07-12 11:28:53.561716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.561781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.562036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.562105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.562394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.562460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.562724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.562792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 00:24:27.697 [2024-07-12 11:28:53.563035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.697 [2024-07-12 11:28:53.563104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.697 qpair failed and we were unable to recover it. 
00:24:27.700 [2024-07-12 11:28:53.597249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.597314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.597587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.597652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.597954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.597992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.598118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.598154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.598431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.598498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 
00:24:27.700 [2024-07-12 11:28:53.598757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.598823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.599099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.599167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.599425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.599490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.599765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.599830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.600154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.600219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 
00:24:27.700 [2024-07-12 11:28:53.600500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.600566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.600825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.600911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.601143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.601208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.601500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.601568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.601884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.601951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 
00:24:27.700 [2024-07-12 11:28:53.602212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.602278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.602532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.602598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.602846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.602932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.603233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.603269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.603391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.603426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 
00:24:27.700 [2024-07-12 11:28:53.603715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.603781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.604088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.604155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.604425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.604490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.604742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.604807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.605056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.605123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 
00:24:27.700 [2024-07-12 11:28:53.605378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.605442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.605734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.605799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.606045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.700 [2024-07-12 11:28:53.606113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.700 qpair failed and we were unable to recover it. 00:24:27.700 [2024-07-12 11:28:53.606407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.606443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.606587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.606624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.606777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.606846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.607121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.607186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.607392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.607458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.607707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.607772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.608074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.608141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.608428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.608465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.608581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.608616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.608765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.608801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.609043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.609110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.609323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.609388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.609643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.609708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.609974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.610013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.610162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.610200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.610416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.610481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.610687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.610755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.611012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.611050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.611199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.611235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.611572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.611608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.611747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.611912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.611948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.612225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.612290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.612588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.612653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.612959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.613026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.613324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.613388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.613615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.613680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.613971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.614038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.614308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.614374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.614654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.614719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.615012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.615079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.615335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.615399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.615688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.615754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.616009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.616076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.616332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.616398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.616655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.616720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 00:24:27.701 [2024-07-12 11:28:53.616981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.617048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.701 qpair failed and we were unable to recover it. 
00:24:27.701 [2024-07-12 11:28:53.617283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.701 [2024-07-12 11:28:53.617349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.617574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.617641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.617930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.617999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.618297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.618363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.618631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.618696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 
00:24:27.702 [2024-07-12 11:28:53.618995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.619063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.619270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.619337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.619588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.619656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.619923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.619993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 00:24:27.702 [2024-07-12 11:28:53.620246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.702 [2024-07-12 11:28:53.620315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.702 qpair failed and we were unable to recover it. 
00:24:27.702 [2024-07-12 11:28:53.620622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.702 [2024-07-12 11:28:53.620688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.702 qpair failed and we were unable to recover it.
--- identical connect()/qpair error group repeated for each intervening reconnect attempt ---
00:24:27.705 [2024-07-12 11:28:53.657953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.705 [2024-07-12 11:28:53.658020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.705 qpair failed and we were unable to recover it.
00:24:27.705 [2024-07-12 11:28:53.658266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.658333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.658580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.658647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.658953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.659021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.659284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.659350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.659658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.659723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.660029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.660099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.660406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.660471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.660734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.660800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.661088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.661155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.661415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.661479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.661777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.661843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.662083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.662149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.662444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.662509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.662768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.662833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.663081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.663147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.663445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.663510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.663809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.663904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.664146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.664213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.664503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.664578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.664884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.664952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.665266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.665331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.665619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.665685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.665950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.666018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.666236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.666304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.666589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.666654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.666864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.666945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.667166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.667232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.667445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.667511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.667767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.667833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.668139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.668205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.668470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.668535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.668800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.668880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.669152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.669217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.669468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.669533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.705 [2024-07-12 11:28:53.669736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.669801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 
00:24:27.705 [2024-07-12 11:28:53.670064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.705 [2024-07-12 11:28:53.670131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.705 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.670343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.670405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.670662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.670727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.671034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.671100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.671358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.671423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.671685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.671750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.671979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.672049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.672341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.672405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.672634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.672698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.672942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.673011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.673279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.673344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.673558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.673624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.673915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.673983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.674240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.674305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.674560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.674625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.674900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.674967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.675193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.675258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.675507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.675575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.675846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.675946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.676198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.676264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.676532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.676598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.676902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.676971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.677273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.677339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.677631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.677708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.677966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.678032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.678286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.678354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.678552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.678617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.678861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.678940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.679164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.679232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.679492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.679558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.679783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.679849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.680141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.680207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.680425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.680490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.680743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.680809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.681076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.681143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 
00:24:27.706 [2024-07-12 11:28:53.681452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.706 [2024-07-12 11:28:53.681516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.706 qpair failed and we were unable to recover it. 00:24:27.706 [2024-07-12 11:28:53.681731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.681795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.682115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.682183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.682463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.682526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.682793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.682858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-12 11:28:53.683108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.683176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.683387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.683450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.683747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.683813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.684108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.684175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.684419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.684485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-12 11:28:53.684715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.684779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.685100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.685168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.685463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.685529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.685832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.685917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 00:24:27.707 [2024-07-12 11:28:53.686225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.707 [2024-07-12 11:28:53.686290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.707 qpair failed and we were unable to recover it. 
00:24:27.707 [2024-07-12 11:28:53.686591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.686656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.686930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.687000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.687260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.687326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.687581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.687646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.687923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.687990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.688246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.688312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.688568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.688635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.688899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.688969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.689255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.689321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.689528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.689597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.689817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.689901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.690189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.690255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.690506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.690572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.690787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.690863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.691100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.691169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.691460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.691525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.691779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.691847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.692121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.692190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.692402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.692466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.692731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.692797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.693015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.693084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.693338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.693406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.693698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.693763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.694039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.694107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.694396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.694462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.694726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.694793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.707 [2024-07-12 11:28:53.695116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.707 [2024-07-12 11:28:53.695183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.707 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.695432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.695497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.695781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.695846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.696128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.696194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.696439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.696506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.696804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.696888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.697089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.697155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.697412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.697478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.697776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.697841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.698161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.698228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.698482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.698547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.698806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.698890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.699161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.699226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.699545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.699612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.699891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.699960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.700224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.700288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.700497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.700564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.700794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.700860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.701144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.701209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.701510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.701577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.701836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.701925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.702156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.702222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.702515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.702580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.702802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.702894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.703194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.703261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.703523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.703589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.703848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.703937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.704216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.704292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.704549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.704614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.704884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.704953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.705213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.705278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.705567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.705632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.705949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.706016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.706272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.706338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.706596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.706661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.706895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.708 [2024-07-12 11:28:53.706965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.708 qpair failed and we were unable to recover it.
00:24:27.708 [2024-07-12 11:28:53.707237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.707303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.707595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.707660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.707886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.707952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.708189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.708254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.708552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.708617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.708891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.708959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.709161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.709228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.709450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.709518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.709772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.709839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.710120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.710186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.710406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.710472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.710697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.710763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.711015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.711085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.711341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.711407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.711715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.711780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.712089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.712158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.712380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.712445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.712657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.712723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.712974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.713043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.713313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.713380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.713684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.713750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.714008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.714076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.714362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.714429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.714671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.714737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.714966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.715035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.715297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.715365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.715607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.715673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.715904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.715973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.716232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.716298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.716491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.716557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.716803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.716888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.717141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.717223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.717499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.709 [2024-07-12 11:28:53.717565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.709 qpair failed and we were unable to recover it.
00:24:27.709 [2024-07-12 11:28:53.717821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.717906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-12 11:28:53.718201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.718269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-12 11:28:53.718505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.718570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-12 11:28:53.718818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.718920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-12 11:28:53.719191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.719257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 
00:24:27.709 [2024-07-12 11:28:53.719488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.719554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.709 [2024-07-12 11:28:53.719765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.709 [2024-07-12 11:28:53.719832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.709 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.720077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.720145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.720374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.720440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.720679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.720744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-12 11:28:53.721010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.721080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.721342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.721407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.721711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.721777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.722044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.722112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.722322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.722389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-12 11:28:53.722645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.722711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.722963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.723032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.723337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.723403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.723660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.723725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.723975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.724044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-12 11:28:53.724301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.724369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.724587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.724653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.724861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.724949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.725197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.725266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.725558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.725624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-12 11:28:53.725903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.725971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.726196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.726261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.726547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.726611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.726915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.726983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.727227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.727294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-12 11:28:53.727541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.727606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.727800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.727881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.728144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.728209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.728464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.728531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.728822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.728903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 
00:24:27.710 [2024-07-12 11:28:53.729169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.729233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.710 [2024-07-12 11:28:53.729481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.710 [2024-07-12 11:28:53.729545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.710 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.729792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.729856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.730143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.730218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.730419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.730487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.730736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.730803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.731131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.731198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.731485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.731550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.731810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.731892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.732147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.732215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.732509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.732574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.732887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.732955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.733213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.733279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.733578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.733644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.733942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.734009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.734250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.734314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.734568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.734632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.734939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.735006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.735292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.735360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.735560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.735627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.735917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.735985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.736277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.736342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.736630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.736696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.736999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.737066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.737318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.737382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.737629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.737696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.737997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.738066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.738259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.738325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.738573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.738642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.738951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.739019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.739239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.739304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.739601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.739668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.739919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.739989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.740261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.740326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.740613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.740678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.740969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.741036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.741338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.741403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.741692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.741757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.742079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.742146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.742417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.742484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 
00:24:27.711 [2024-07-12 11:28:53.742781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.742847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.743159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.711 [2024-07-12 11:28:53.743226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.711 qpair failed and we were unable to recover it. 00:24:27.711 [2024-07-12 11:28:53.743475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.743541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-12 11:28:53.743843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.743941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-12 11:28:53.744202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.744268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 
00:24:27.712 [2024-07-12 11:28:53.744556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.744624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-12 11:28:53.744944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.745013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-12 11:28:53.745291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.745328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-12 11:28:53.745486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.745522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 00:24:27.712 [2024-07-12 11:28:53.745651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.712 [2024-07-12 11:28:53.745688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:27.712 qpair failed and we were unable to recover it. 
00:24:27.712 [2024-07-12 11:28:53.745834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.712 [2024-07-12 11:28:53.745879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:27.712 qpair failed and we were unable to recover it.
00:24:27.712 [2024-07-12 11:28:53.749354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.712 [2024-07-12 11:28:53.749409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:27.712 qpair failed and we were unable to recover it.
00:24:27.715 [2024-07-12 11:28:53.777337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.777401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.777694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.777756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.778067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.778132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.778386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.778450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.778709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.778771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 
00:24:27.715 [2024-07-12 11:28:53.779075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.779140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.779402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.779466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.779722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.779785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.780091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.780157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.780452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.780517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 
00:24:27.715 [2024-07-12 11:28:53.780809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.780891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.781156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.781219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.781524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.781587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.781923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.781990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.782279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.782343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 
00:24:27.715 [2024-07-12 11:28:53.782645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.782709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.782923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.782993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.783281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.783345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.783646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.783710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.784003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.784070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 
00:24:27.715 [2024-07-12 11:28:53.784372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.784436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.784728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.784792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.785027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.715 [2024-07-12 11:28:53.785095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.715 qpair failed and we were unable to recover it. 00:24:27.715 [2024-07-12 11:28:53.785308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.785374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.785641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.785707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.786009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.786085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.786389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.786453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.786701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.786766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.787013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.787079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.787341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.787406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.787709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.787773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.788040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.788109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.788413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.788477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.788766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.788830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.789066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.789131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.789425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.789488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.789797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.789862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.790170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.790233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.790541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.790605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.790914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.790981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.791237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.791300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.791588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.791652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.791918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.791985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.792280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.792342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.792601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.792664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.792961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.793027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.793326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.793389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.793616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.793679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.793922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.793987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.794238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.794301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.794596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.794660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.794908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.794973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.795273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.795338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.795540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.795604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.795836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.795916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.796184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.796249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.796502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.796565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.796815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.796891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.797148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.797212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.797503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.797566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 
00:24:27.716 [2024-07-12 11:28:53.797888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.797954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.798243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.798307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.798568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.798634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.716 [2024-07-12 11:28:53.798931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.716 [2024-07-12 11:28:53.798998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.716 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.799303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.799368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 
00:24:27.717 [2024-07-12 11:28:53.799677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.799751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.800044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.800110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.800367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.800433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.800692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.800760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.801067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.801134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 
00:24:27.717 [2024-07-12 11:28:53.801385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.801449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.801658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.801726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.801984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.802050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.802293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.802359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.802592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.802656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 
00:24:27.717 [2024-07-12 11:28:53.802945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.803012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.803266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.803329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.803597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.803661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.803944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.804009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 00:24:27.717 [2024-07-12 11:28:53.804322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.717 [2024-07-12 11:28:53.804386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.717 qpair failed and we were unable to recover it. 
00:24:27.717 [... the same three-line record (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every retry from 2024-07-12 11:28:53.804697 through 11:28:53.841002 ...]
00:24:27.999 [2024-07-12 11:28:53.841266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.999 [2024-07-12 11:28:53.841329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.999 qpair failed and we were unable to recover it. 00:24:27.999 [2024-07-12 11:28:53.841630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.999 [2024-07-12 11:28:53.841693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.999 qpair failed and we were unable to recover it. 00:24:27.999 [2024-07-12 11:28:53.841995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.999 [2024-07-12 11:28:53.842061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.999 qpair failed and we were unable to recover it. 00:24:27.999 [2024-07-12 11:28:53.842261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.999 [2024-07-12 11:28:53.842324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:27.999 qpair failed and we were unable to recover it. 00:24:27.999 [2024-07-12 11:28:53.842570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.842632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.842830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.842926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.843218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.843284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.843583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.843649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.843887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.843955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.844230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.844295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.844544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.844612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.844902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.844969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.845224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.845288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.845515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.845579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.845813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.845891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.846137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.846201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.846495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.846559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.846848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.846943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.847164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.847228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.847436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.847499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.847739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.847802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.848052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.848117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.848384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.848448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.848746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.848810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.849040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.849105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.849358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.849422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.849712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.849775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.850052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.850117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.850343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.850407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.850686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.850750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.851048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.851113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.851413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.851477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.851765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.851836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.852154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.852221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.852517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.852611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.852936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.853019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.853284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.853350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.853606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.853671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.853903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.853969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 00:24:28.000 [2024-07-12 11:28:53.854233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.000 [2024-07-12 11:28:53.854297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.000 qpair failed and we were unable to recover it. 
00:24:28.000 [2024-07-12 11:28:53.854588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.854651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.854910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.854976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.855219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.855283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.855530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.855598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.855932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.856000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.856252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.856317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.856580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.856645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.856902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.856970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.857172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.857237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.857462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.857526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.857738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.857802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.858074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.858147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.858390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.858453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.858684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.858748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.859043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.859116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.859402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.859465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.859688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.859751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.860015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.860079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.860375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.860438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.860686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.860749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.860979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.861043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.861312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.861376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.861678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.861743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.862002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.862067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.862324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.862388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.862637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.862701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.862949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.863018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.863313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.863377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.863628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.863692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.863940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.864009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.864305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.864368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.864627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.864692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.864933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.865001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.865216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.865281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.865581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.865643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 
00:24:28.001 [2024-07-12 11:28:53.865852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.865949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.001 qpair failed and we were unable to recover it. 00:24:28.001 [2024-07-12 11:28:53.866209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.001 [2024-07-12 11:28:53.866274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.002 qpair failed and we were unable to recover it. 00:24:28.002 [2024-07-12 11:28:53.866538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.002 [2024-07-12 11:28:53.866603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.002 qpair failed and we were unable to recover it. 00:24:28.002 [2024-07-12 11:28:53.866843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.002 [2024-07-12 11:28:53.866937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.002 qpair failed and we were unable to recover it. 00:24:28.002 [2024-07-12 11:28:53.867165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.002 [2024-07-12 11:28:53.867229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.002 qpair failed and we were unable to recover it. 
00:24:28.002 [... the same two-line error pair (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeated continuously from 11:28:53.867488 through 11:28:53.903348 ...]
00:24:28.005 [2024-07-12 11:28:53.903599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.903632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.903751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.903821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.904073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.904106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.904223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.904298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.904474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.904506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 
00:24:28.005 [2024-07-12 11:28:53.904624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.904663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.904962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.905029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.905295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.905358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.905570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.905649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.905917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.905977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 
00:24:28.005 [2024-07-12 11:28:53.906205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.906283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.906513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.906594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.906840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.906919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.907120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.907175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.907434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.907492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 
00:24:28.005 [2024-07-12 11:28:53.907702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.907759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.907983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.908042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.908240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.908326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.908613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.908675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.908982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.909042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 
00:24:28.005 [2024-07-12 11:28:53.909330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.909388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.909646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.909707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.909964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.910026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.910315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.910377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 00:24:28.005 [2024-07-12 11:28:53.910677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.005 [2024-07-12 11:28:53.910739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.005 qpair failed and we were unable to recover it. 
00:24:28.005 [2024-07-12 11:28:53.911043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.911103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.911407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.911470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.911752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.911814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.912068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.912127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.912432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.912495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 
00:24:28.006 [2024-07-12 11:28:53.912704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.912766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.913125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.913191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.913503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.913565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.913857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.913938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.914200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.914263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 
00:24:28.006 [2024-07-12 11:28:53.914488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.914554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.914848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.914932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.915148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.915213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.915504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.915567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.915821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.915917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 
00:24:28.006 [2024-07-12 11:28:53.916227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.916291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.916508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.916573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.916826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.916910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.917156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.917218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.917508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.917571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 
00:24:28.006 [2024-07-12 11:28:53.917830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.917923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.918184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.918246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.918538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.918600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.918847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.918936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.919238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.919302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 
00:24:28.006 [2024-07-12 11:28:53.919494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.919556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.919809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.919910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.920222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.920290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.920569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.920631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.920940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.921005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 
00:24:28.006 [2024-07-12 11:28:53.921298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.921361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.921606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.921671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.921963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.922028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.006 qpair failed and we were unable to recover it. 00:24:28.006 [2024-07-12 11:28:53.922288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.006 [2024-07-12 11:28:53.922361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.922597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.922660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 
00:24:28.007 [2024-07-12 11:28:53.922920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.922985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.923237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.923299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.923584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.923646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.923894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.923958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.924205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.924267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 
00:24:28.007 [2024-07-12 11:28:53.924567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.924630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.924926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.924993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.925244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.925307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.925592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.925655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.925958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.926022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 
00:24:28.007 [2024-07-12 11:28:53.926277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.926340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.926556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.926618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.926910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.926974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.927219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.927280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 00:24:28.007 [2024-07-12 11:28:53.927560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.007 [2024-07-12 11:28:53.927623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.007 qpair failed and we were unable to recover it. 
00:24:28.007 [2024-07-12 11:28:53.927877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.007 [2024-07-12 11:28:53.927942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.007 qpair failed and we were unable to recover it.
00:24:28.010 (identical connect() failed / sock connection error / qpair failed messages repeated for each subsequent reconnect attempt, timestamps 11:28:53.928241 through 11:28:53.965215)
00:24:28.010 [2024-07-12 11:28:53.965473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.965515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.965664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.965705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.966009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.966083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.966347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.966409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.966659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.966723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 
00:24:28.010 [2024-07-12 11:28:53.968384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.968416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.968540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.968568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.968668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.968697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.968799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.968826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 00:24:28.010 [2024-07-12 11:28:53.968997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.969053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.010 qpair failed and we were unable to recover it. 
00:24:28.010 [2024-07-12 11:28:53.969214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.010 [2024-07-12 11:28:53.969265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.969427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.969499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.969614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.969640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.969760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.969787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.969916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.969945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 
00:24:28.011 [2024-07-12 11:28:53.970064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.970196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.970354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.970473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.970654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 
00:24:28.011 [2024-07-12 11:28:53.970769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.970903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.970931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.971028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.971207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.971351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 
00:24:28.011 [2024-07-12 11:28:53.971471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.971596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.971771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.971894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.971922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.972044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 
00:24:28.011 [2024-07-12 11:28:53.972212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.972344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.972464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.972609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.972730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 
00:24:28.011 [2024-07-12 11:28:53.972894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.972923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.973019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.973046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.973132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.973159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.973271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.973298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.973388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.973414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 
00:24:28.011 [2024-07-12 11:28:53.973504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.973531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.011 [2024-07-12 11:28:53.973657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.011 [2024-07-12 11:28:53.973684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.011 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.973802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.973833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.973960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.973988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.974099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.974126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.974237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.974264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.974420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.974447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.974537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.974564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.974715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.974742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.974831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.974859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.974984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.975011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.975153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.975179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.975386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.975451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.975567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.975594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.975698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.975726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.975849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.975906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.976065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.976115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.976284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.976335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.976448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.976474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.976593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.976619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.976712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.976740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.976837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.976874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.977008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.977153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.977289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.977410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.977565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.977705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.977827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.977855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.978004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.978147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.978287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.978401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.978551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.978699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 
00:24:28.012 [2024-07-12 11:28:53.978843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.978879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.978976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.979005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.979150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.012 [2024-07-12 11:28:53.979177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.012 qpair failed and we were unable to recover it. 00:24:28.012 [2024-07-12 11:28:53.979336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.979363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.979451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.979479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.979600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.979626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.979747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.979774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.979901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.979934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.980033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.980060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.980195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.980222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.980371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.980407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.980508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.980536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.980683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.980711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.980829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.980855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.981025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.981079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.981257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.981328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.981559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.981621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.981710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.981737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.981859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.981896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.982125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.982176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.982323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.982372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.982474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.982502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.982587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.982615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.982737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.982764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.982861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.982903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.983030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.983143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.983259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.983407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.983566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.983675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.983790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.983948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.983975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.984073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.984100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.984226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.984255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 
00:24:28.013 [2024-07-12 11:28:53.984388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.984415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.984499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.984525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.984647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.984674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.984760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.013 [2024-07-12 11:28:53.984795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.013 qpair failed and we were unable to recover it. 00:24:28.013 [2024-07-12 11:28:53.984916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.984944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.985043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.985202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.985353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.985470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.985613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.985735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.985916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.985945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.986047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.986165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.986278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.986419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.986537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.986729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.986888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.986916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.987039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.987189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.987317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.987472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.987622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.987792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.987924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.987952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.988050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.988169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.988287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.988445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.988559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.988669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.988843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.988889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.989009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.989152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.989298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.989446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.989567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.989688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 00:24:28.014 [2024-07-12 11:28:53.989826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.014 [2024-07-12 11:28:53.989853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.014 qpair failed and we were unable to recover it. 
00:24:28.014 [2024-07-12 11:28:53.989983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.990115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.990224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.990373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.990491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 
00:24:28.015 [2024-07-12 11:28:53.990606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.990725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.990896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.990925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.991038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.991180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 
00:24:28.015 [2024-07-12 11:28:53.991362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.991509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.991658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.991817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.991972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.991999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 
00:24:28.015 [2024-07-12 11:28:53.992100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.992126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.992281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.992308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.992403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.992430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.992523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.992550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.992694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.992721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 
00:24:28.015 [2024-07-12 11:28:53.992809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.992836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.992973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.993088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.993261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.993433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 
00:24:28.015 [2024-07-12 11:28:53.993593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.993743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.993855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.993897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.994067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.994119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 00:24:28.015 [2024-07-12 11:28:53.994304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.015 [2024-07-12 11:28:53.994361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.015 qpair failed and we were unable to recover it. 
00:24:28.015 [2024-07-12 11:28:53.994505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.015 [2024-07-12 11:28:53.994532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.015 qpair failed and we were unable to recover it.
00:24:28.015 [2024-07-12 11:28:53.994651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.015 [2024-07-12 11:28:53.994678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.015 qpair failed and we were unable to recover it.
00:24:28.015 [2024-07-12 11:28:53.994822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.015 [2024-07-12 11:28:53.994849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.015 qpair failed and we were unable to recover it.
00:24:28.015 [2024-07-12 11:28:53.994986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.015 [2024-07-12 11:28:53.995039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.015 qpair failed and we were unable to recover it.
00:24:28.015 [2024-07-12 11:28:53.995187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.015 [2024-07-12 11:28:53.995246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.015 qpair failed and we were unable to recover it.
00:24:28.015 [2024-07-12 11:28:53.995381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.015 [2024-07-12 11:28:53.995429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.015 qpair failed and we were unable to recover it.
00:24:28.015 [2024-07-12 11:28:53.995549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.995576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.995697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.995724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.995880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.995908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.996905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.996990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.997883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.997981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.998915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.998943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.999065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.999091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.999215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.999242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.999342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.999369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.999489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.999517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.999599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.999627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:53.999783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:53.999810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.016 [2024-07-12 11:28:54.000019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.016 [2024-07-12 11:28:54.000079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.016 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.000301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.000352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.000466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.000493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.000579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.000606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.000730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.000756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.000904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.000932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.001940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.001968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.002090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.002117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.002235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.002263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.002389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.002416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.002567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.002594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.002684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.002711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.002876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.002904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.003898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.003930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.004087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.004114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.004239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.004267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.004365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.004392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.004520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.004547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.004668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.004696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.004840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.004875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.005021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.005072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.005237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.005296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.005455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.005506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.005631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.005659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.017 [2024-07-12 11:28:54.005794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.017 [2024-07-12 11:28:54.005821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.017 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.006922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.006951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.007923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.007951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.008100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.008229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.008377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.008553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.008669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.018 [2024-07-12 11:28:54.008822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.018 qpair failed and we were unable to recover it.
00:24:28.018 [2024-07-12 11:28:54.008985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.009107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.009262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.009385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.009503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 
00:24:28.018 [2024-07-12 11:28:54.009653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.009797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.009948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.009980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.010082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.010201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 
00:24:28.018 [2024-07-12 11:28:54.010334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.010486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.010632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.010755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 00:24:28.018 [2024-07-12 11:28:54.010935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.018 [2024-07-12 11:28:54.010964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.018 qpair failed and we were unable to recover it. 
00:24:28.018 [2024-07-12 11:28:54.011112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.011139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.011253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.011280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.011397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.011431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.011584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.011611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.011731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.011758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.011850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.011887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.012008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.012061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.012260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.012312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.012437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.012463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.012551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.012578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.012700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.012727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.012877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.012905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.012988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.013015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.013099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.013125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.013315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.013365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.013492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.013519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.013639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.013665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.013813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.013841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.014057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.014133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.014369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.014419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.014621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.014669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.014827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.014888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.015021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.015063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.015208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.015250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.015453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.015514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.015694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.015753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.015938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.015967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.016096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.016145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.016317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.016364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.016567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.016619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.016763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.016790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.016912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.016939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.017092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.017151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.017309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.017360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.017555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.017610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 
00:24:28.019 [2024-07-12 11:28:54.017696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.019 [2024-07-12 11:28:54.017723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.019 qpair failed and we were unable to recover it. 00:24:28.019 [2024-07-12 11:28:54.017812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.017839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.018003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.018169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.018376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.018551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.018659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.018780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.018912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.018941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.019021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.019141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.019270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.019413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.019589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.019731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.019884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.019913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.020042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.020155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.020270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.020441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.020589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.020748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.020926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.020954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.021064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.021236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.021380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.021528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.021674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.021815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.021965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.021993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.022108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.022135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.022251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.022277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.022392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.022419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.022569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.022596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 00:24:28.020 [2024-07-12 11:28:54.022715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.022742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.020 [2024-07-12 11:28:54.022856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.020 [2024-07-12 11:28:54.022892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.020 qpair failed and we were unable to recover it. 
00:24:28.024 [2024-07-12 11:28:54.038789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.038817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.038921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.038949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.039073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.039203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.039324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 
00:24:28.024 [2024-07-12 11:28:54.039479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.039600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.039790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.039952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.039980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.040095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 
00:24:28.024 [2024-07-12 11:28:54.040269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.040397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.040547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.040655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.040770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 
00:24:28.024 [2024-07-12 11:28:54.040929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.040957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.041078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.041234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.041357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.041535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 
00:24:28.024 [2024-07-12 11:28:54.041647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.041793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.041940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.041967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.042058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.042086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.024 [2024-07-12 11:28:54.042185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.042212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 
00:24:28.024 [2024-07-12 11:28:54.042330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.024 [2024-07-12 11:28:54.042357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.024 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.042450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.042477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.042626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.042653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.042775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.042801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.042891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.042919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.043042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.043226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.043349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.043471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.043590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.043745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.043851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.043889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.043977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.044114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.044231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.044409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.044526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.044650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.044799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.044825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.044997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.045146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.045317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.045438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.045585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.045707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.045853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.045888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.045985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.046101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.046214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.046362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.046482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.046603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.046726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.046842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.046894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.047015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.047042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 
00:24:28.025 [2024-07-12 11:28:54.047126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.047153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.047283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.025 [2024-07-12 11:28:54.047309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.025 qpair failed and we were unable to recover it. 00:24:28.025 [2024-07-12 11:28:54.047431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.047457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.047546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.047574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.047660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.047687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 
00:24:28.026 [2024-07-12 11:28:54.047804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.047831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.047960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.047988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.048105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.048131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.048250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.048278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.048400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.048427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 
00:24:28.026 [2024-07-12 11:28:54.048545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.048572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.048666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.048694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.048787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.048813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.049003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.049158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 
00:24:28.026 [2024-07-12 11:28:54.049302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.049476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.049653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.049799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.049952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.049980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 
00:24:28.026 [2024-07-12 11:28:54.050058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.050176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.050347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.050518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.050656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 
00:24:28.026 [2024-07-12 11:28:54.050777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.050920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.050947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.026 qpair failed and we were unable to recover it. 00:24:28.026 [2024-07-12 11:28:54.051030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.026 [2024-07-12 11:28:54.051057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.051148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.051175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.051299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.051326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.051448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.051475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.051555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.051581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.051704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.051731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.051852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.051889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.052011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.052128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.052279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.052428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.052602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.052722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.052848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.052882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.053576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.053907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.053991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.054135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.054309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.054421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.054569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.054714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.054899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.054932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.055057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.055180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.055331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.055473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.055595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 
00:24:28.027 [2024-07-12 11:28:54.055754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.055878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.055907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.056055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.056082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.027 [2024-07-12 11:28:54.056177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.027 [2024-07-12 11:28:54.056204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.027 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.056328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.056356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.056440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.056467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.056589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.056617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.056737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.056764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.056890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.056918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.057066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.057211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.057332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.057473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.057611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.057722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.057834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.057967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.057994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.058448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.058854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.058991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.059137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.059254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.059434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.059562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.059703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.059852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.059890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.060550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.060913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.060941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 00:24:28.028 [2024-07-12 11:28:54.061026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.028 [2024-07-12 11:28:54.061053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.028 qpair failed and we were unable to recover it. 
00:24:28.028 [2024-07-12 11:28:54.061148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.061174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.061292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.061320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.061447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.061474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.061599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.061625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.061745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.061772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 
00:24:28.029 [2024-07-12 11:28:54.061895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.061922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 
00:24:28.029 [2024-07-12 11:28:54.062566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.062962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.062989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.063080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.063107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 
00:24:28.029 [2024-07-12 11:28:54.063222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.063248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.063369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.063395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.063514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.063540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.063664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.063690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 00:24:28.029 [2024-07-12 11:28:54.063788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.029 [2024-07-12 11:28:54.063815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.029 qpair failed and we were unable to recover it. 
00:24:28.029 [2024-07-12 11:28:54.063933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.063961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.064926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.064954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.029 [2024-07-12 11:28:54.065882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.029 qpair failed and we were unable to recover it.
00:24:28.029 [2024-07-12 11:28:54.065970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.065996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.066917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.066945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.067965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.067991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.068873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.068995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.069902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.069996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.070023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.070115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.070143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.070232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.070259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.070341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.070368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.030 qpair failed and we were unable to recover it.
00:24:28.030 [2024-07-12 11:28:54.070483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.030 [2024-07-12 11:28:54.070510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.070606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.070633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.070718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.070745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.070839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.070883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.070986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.071828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.071855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.072008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d0e0 is same with the state(5) to be set
00:24:28.031 [2024-07-12 11:28:54.072181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.072223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.072338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.072379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.072477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.072506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.072607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.072636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.072739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.072778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.072926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.072966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.073117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.073155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.073308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.073345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.073533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.073570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.073690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.073719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.073840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.073873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.073981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.074145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.074347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.074485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.074640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.074765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.074893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.074920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.075045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.031 [2024-07-12 11:28:54.075072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.031 qpair failed and we were unable to recover it.
00:24:28.031 [2024-07-12 11:28:54.075167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.075194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.075308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.075334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.075433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.075460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.075549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.075577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.075668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.075695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.075826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.075877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.075987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.032 [2024-07-12 11:28:54.076833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.032 qpair failed and we were unable to recover it.
00:24:28.032 [2024-07-12 11:28:54.076953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.077159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.077340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.077510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.077680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
00:24:28.032 [2024-07-12 11:28:54.077797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.077926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.077954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.078045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.078072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.078188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.078215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.078331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.078358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
00:24:28.032 [2024-07-12 11:28:54.078447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.078475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.078643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.078685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.078813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.078844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.078972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.079013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.079183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.079223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
00:24:28.032 [2024-07-12 11:28:54.079377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.079416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.079573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.079610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.079729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.079756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.079850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.079884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.079984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.080011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 
00:24:28.032 [2024-07-12 11:28:54.080130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.080167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.080328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.080365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.080484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.080521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.032 [2024-07-12 11:28:54.080668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.032 [2024-07-12 11:28:54.080706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.032 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.080877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.080927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.081047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.081074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.081227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.081271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.081426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.081464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.081613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.081651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.081806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.081832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.081932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.081960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.082056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.082083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.082234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.082271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.082402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.082440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.082591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.082629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.082746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.082783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.082916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.082944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.083051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.083079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.083211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.083240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.083373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.083420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.083537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.083594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.083712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.083739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.083862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.083895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.084019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.084197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.084323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.084470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.084590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.084712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.084833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.084964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.084991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.085117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.085271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.085415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.085527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 
00:24:28.033 [2024-07-12 11:28:54.085670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.085782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.085923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.085977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.086068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.086095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.033 qpair failed and we were unable to recover it. 00:24:28.033 [2024-07-12 11:28:54.086176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.033 [2024-07-12 11:28:54.086202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.034 [2024-07-12 11:28:54.086296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.086323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.086439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.086466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.086584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.086611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.086724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.086751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.086888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.086916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.034 [2024-07-12 11:28:54.087006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.087122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.087247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.087362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.087511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.034 [2024-07-12 11:28:54.087623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.087749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.087886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.087916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.088039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.088067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.088232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.088269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.034 [2024-07-12 11:28:54.088424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.088461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.088582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.088620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.088749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.088777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.088877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.088905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.089049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.089098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.034 [2024-07-12 11:28:54.089220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.089266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.089444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.089492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.089578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.089605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.089726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.089755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.089849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.089883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.034 [2024-07-12 11:28:54.090022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.090060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.090249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.090287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.090438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.090475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.090626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.090663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 00:24:28.034 [2024-07-12 11:28:54.090777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.034 [2024-07-12 11:28:54.090805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.034 qpair failed and we were unable to recover it. 
00:24:28.038 [2024-07-12 11:28:54.108111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.038 [2024-07-12 11:28:54.108138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.038 qpair failed and we were unable to recover it. 00:24:28.038 [2024-07-12 11:28:54.108272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.038 [2024-07-12 11:28:54.108305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.038 qpair failed and we were unable to recover it. 00:24:28.038 [2024-07-12 11:28:54.108431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.038 [2024-07-12 11:28:54.108458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.038 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.108616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.108653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.108789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.108827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 
00:24:28.316 [2024-07-12 11:28:54.108989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.109109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.109232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.109395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.109555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 
00:24:28.316 [2024-07-12 11:28:54.109764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.109920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.109947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.110040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.110071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.110221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.110259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.110387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.110424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 
00:24:28.316 [2024-07-12 11:28:54.110626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.110688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.110787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.110814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.110920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.110947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.111057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.111093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.111234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.111281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 
00:24:28.316 [2024-07-12 11:28:54.111426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.111475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.111567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.111593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.111733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.111760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.111851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.111886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.112012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.112044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 
00:24:28.316 [2024-07-12 11:28:54.112135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.112160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.112286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.112313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.112449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.112496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.112594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.112625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 00:24:28.316 [2024-07-12 11:28:54.112715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.316 [2024-07-12 11:28:54.112741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.316 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.112826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.112852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.112953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.112981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.113065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.113219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.113361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.113509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.113616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.113762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.113895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.113923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.114012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.114118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.114263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.114415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.114531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.114687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.114832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.114858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.115522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.115918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.115946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.116061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.116200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.116315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.116461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.116603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.116716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.116856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.116889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.116989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.117131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.117249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.117378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.117503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.117645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.117791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.117910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.117938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 00:24:28.317 [2024-07-12 11:28:54.118039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.118066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.317 qpair failed and we were unable to recover it. 
00:24:28.317 [2024-07-12 11:28:54.118182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.317 [2024-07-12 11:28:54.118208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.118301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.118329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.118409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.118436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.118535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.118562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.118661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.118688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.118777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.118804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.118924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.118952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.119078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.119195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.119318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.119445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.119591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.119745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.119875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.119903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.120017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.120161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.120278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.120406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.120520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.120642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.120803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.120950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.120978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.121069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.121188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.121309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.121448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.121573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.121734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.121850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.121887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.122009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.122197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.122361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.122503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.122654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.122769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.122896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.122924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.123020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.123047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.123175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.123202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.123316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.123342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.123438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.123465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 
00:24:28.318 [2024-07-12 11:28:54.123559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.318 [2024-07-12 11:28:54.123585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.318 qpair failed and we were unable to recover it. 00:24:28.318 [2024-07-12 11:28:54.123675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.123701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.123790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.123817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.123930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.123957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.124081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.124116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.124235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.124271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.124387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.124422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.124525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.124560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.124709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.124744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.124855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.124913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.125039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.125067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.125154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.125181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.125365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.125419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.125634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.125670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.125800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.125850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.126017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.126044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.126190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.126226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.126402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.126438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.126549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.126584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.126736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.126771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.126910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.126938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.127028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.127147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.127252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.127410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.127567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.127747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.127933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.127960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.128047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.128074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.128161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.128187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.128326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.128361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.128489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.128537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.128680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.128715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.128854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.128914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.129037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.129164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.129285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.129417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.129576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.129729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 
00:24:28.319 [2024-07-12 11:28:54.129916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.129943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.130070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.130097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.319 qpair failed and we were unable to recover it. 00:24:28.319 [2024-07-12 11:28:54.130243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.319 [2024-07-12 11:28:54.130278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.130415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.130464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.130618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.130653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 
00:24:28.320 [2024-07-12 11:28:54.130766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.130802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.130960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.130987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.131083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.131109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.131261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.131296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.131500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.131536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 
00:24:28.320 [2024-07-12 11:28:54.131679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.131715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.131860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.131917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.132043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.132070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.132161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.132191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.132320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.132355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 
00:24:28.320 [2024-07-12 11:28:54.132470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.132514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.132658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.132694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.132815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.132850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.133033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.133060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 00:24:28.320 [2024-07-12 11:28:54.133153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.320 [2024-07-12 11:28:54.133196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.320 qpair failed and we were unable to recover it. 
00:24:28.320 [2024-07-12 11:28:54.133319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.320 [2024-07-12 11:28:54.133355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.320 qpair failed and we were unable to recover it.
00:24:28.323 [2024-07-12 11:28:54.150482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.323 [2024-07-12 11:28:54.150540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.323 qpair failed and we were unable to recover it.
00:24:28.323 [2024-07-12 11:28:54.153057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.153100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.153262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.153299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.153458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.153495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.153644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.153702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.153889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.153928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 
00:24:28.323 [2024-07-12 11:28:54.154055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.154090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.154208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.154244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.154396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.154434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.154562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.154626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.154779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.154825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 
00:24:28.323 [2024-07-12 11:28:54.155035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.155074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.155280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.155319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.155488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.155527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.155652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.155690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.155835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.155885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 
00:24:28.323 [2024-07-12 11:28:54.156043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.156084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.156245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.156292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.156458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.156495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.156669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.156708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.156876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.156916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 
00:24:28.323 [2024-07-12 11:28:54.157048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.157090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.157230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.157268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.157424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.157461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.157610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.157648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 00:24:28.323 [2024-07-12 11:28:54.157767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.323 [2024-07-12 11:28:54.157805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.323 qpair failed and we were unable to recover it. 
00:24:28.323 [2024-07-12 11:28:54.157986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.158025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.158174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.158212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.158357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.158400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.158535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.158573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.158757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.158795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.158913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.158951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.159100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.159138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.159286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.159323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.159454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.159491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.159675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.159722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.159881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.159920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.160078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.160116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.160227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.160265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.160376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.160413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.160571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.160609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.160735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.160773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.160946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.160985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.161109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.161147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.161262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.161300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.161430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.161467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.161608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.161646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.161810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.161847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.161975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.162013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.162147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.162185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.162337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.162375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.162484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.162521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.162673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.162710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.162854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.162898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.163046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.163084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.163193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.163236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.163362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.163399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.163584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.163621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.163744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.163782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.163901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.163939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.164065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.164104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 
00:24:28.324 [2024-07-12 11:28:54.164261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.324 [2024-07-12 11:28:54.164299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.324 qpair failed and we were unable to recover it. 00:24:28.324 [2024-07-12 11:28:54.164450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.164487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.164589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.164627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.164776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.164813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.164959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.164998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.165117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.165154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.165304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.165342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.165481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.165518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.165686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.165724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.165877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.165915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.166103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.166141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.166296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.166334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.166487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.166524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.166684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.166721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.166844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.166900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.167017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.167055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.167208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.167245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.167365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.167402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.167590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.167628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.167749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.167786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.167941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.167980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.168141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.168178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.168314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.168351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.168497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.168534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.168686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.168723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.168881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.168919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.169082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.169120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.169269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.169306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.169451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.169490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.169639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.169678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.169835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.169884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.170044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.170084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.170245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.170285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.170452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.170491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.170646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.170686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.170844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.170905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.171078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.171118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.171291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.171331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.171456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.171500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.171683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.171725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 
00:24:28.325 [2024-07-12 11:28:54.171853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.171904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.172042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.325 [2024-07-12 11:28:54.172082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.325 qpair failed and we were unable to recover it. 00:24:28.325 [2024-07-12 11:28:54.172244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.172292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.172439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.172481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.172620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.172660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.172810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.172849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.172990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.173030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.173161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.173200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.173342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.173381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.173501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.173540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.173697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.173737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.173901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.173941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.174085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.174124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.174283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.174322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.174486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.174526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.174651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.174692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.174854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.174921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.175084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.175123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.175246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.175285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.175425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.175465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.175599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.175661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.175852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.175900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.176028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.176074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.176230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.176269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.176409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.176448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.176603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.176642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.176806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.176847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.177035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.177075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.177198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.177237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.177368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.177407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.177558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.177597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.177771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.177810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.177951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.177992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.178141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.178181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.178374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.178414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.178532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.178572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.178693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.178733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.178908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.178949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.179077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.179116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.179236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.179275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 
00:24:28.326 [2024-07-12 11:28:54.179424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.179463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.179625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.179665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.179825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.179864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.326 qpair failed and we were unable to recover it. 00:24:28.326 [2024-07-12 11:28:54.180029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.326 [2024-07-12 11:28:54.180068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.180218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.180258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.180421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.180460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.180655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.180694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.180848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.180895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.181093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.181132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.181259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.181298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.181470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.181509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.181670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.181709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.181844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.181892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.182049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.182088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.182209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.182250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.182423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.182462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.182569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.182608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.182803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.182842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.183027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.183066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.183249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.183289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.183458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.183499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.183630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.183672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.183831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.183884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.184019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.184068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.184193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.184235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.184447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.184488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.184650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.184690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.184824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.184863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.185045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.185084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.185278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.185317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.185489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.185529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.185775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.185828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.186032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.186071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.186205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.186245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.186415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.186455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.186627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.186666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.186805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.186858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.187099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.187139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.187332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.187370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 00:24:28.327 [2024-07-12 11:28:54.187530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.327 [2024-07-12 11:28:54.187569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.327 qpair failed and we were unable to recover it. 
00:24:28.327 [2024-07-12 11:28:54.187760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.327 [2024-07-12 11:28:54.187799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.327 qpair failed and we were unable to recover it.
00:24:28.327 [2024-07-12 11:28:54.187968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.327 [2024-07-12 11:28:54.188009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.327 qpair failed and we were unable to recover it.
00:24:28.327 [2024-07-12 11:28:54.188149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.327 [2024-07-12 11:28:54.188187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.327 qpair failed and we were unable to recover it.
00:24:28.327 [2024-07-12 11:28:54.188383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.327 [2024-07-12 11:28:54.188422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.327 qpair failed and we were unable to recover it.
00:24:28.327 [2024-07-12 11:28:54.188597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.327 [2024-07-12 11:28:54.188647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.327 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.188790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.188855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.189083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.189141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.189333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.189386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.189598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.189651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.189803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.189842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.190001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.190074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.190359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.190408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.190663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.190716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.190889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.190931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.191102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.191142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.191298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.191337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.191496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.191535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.191695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.191734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.191862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.191908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.192093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.192135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.192317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.192358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.192479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.192520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.192651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.192693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.192875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.192917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.193126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.193190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.193332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.193375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.193506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.193549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.193752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.193794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.193986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.194031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.194237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.194279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.194469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.194510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.194726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.194767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.194976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.195019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.195161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.195202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.195404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.195446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.195616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.195657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.195859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.195933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.196150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.196193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.196374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.196417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.196548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.196590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.196727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.196769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.196935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.196980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.197126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.197169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.197374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.197416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.197586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.197627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.328 [2024-07-12 11:28:54.197800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.328 [2024-07-12 11:28:54.197841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.328 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.198016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.198056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.198191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.198232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.198364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.198405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.198614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.198657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.198796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.198839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.199050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.199114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.199371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.199429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.199663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.199728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.200023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.200081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.200309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.200374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.200620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.200662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.200897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.200953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.201121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.201177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.201346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.201400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.201665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.201707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.201894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.201965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.202246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.202309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.202606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.202669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.202945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.202996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.203148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.203225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.203506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.203549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.203791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.203841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.204050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.204118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.204373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.204428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.204677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.204719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.204947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.205000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.205190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.205241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.205473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.205536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.205792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.205852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.206087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.206141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.206453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.206514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.206676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.206730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.206982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.207038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.207283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.207349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.207614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.329 [2024-07-12 11:28:54.207666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.329 qpair failed and we were unable to recover it.
00:24:28.329 [2024-07-12 11:28:54.207890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.207945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.208233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.208296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.208512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.208575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.208843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.208947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.209149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.209204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.209415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.209469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.209753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.209816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.210156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.210241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.210524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.210592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.210927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.210986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.211281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.211336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.211579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.211642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.211917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.211972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.212239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.330 [2024-07-12 11:28:54.212282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.330 qpair failed and we were unable to recover it.
00:24:28.330 [2024-07-12 11:28:54.212486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.212529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.212722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.212798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.213035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.213091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.213364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.213416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.213599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.213650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 
00:24:28.330 [2024-07-12 11:28:54.213938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.213994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.214182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.214250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.214484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.214548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.214797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.214860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.215147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.215221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 
00:24:28.330 [2024-07-12 11:28:54.215510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.215573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.215859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.215923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.216205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.216269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.216583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.216646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.216900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.216944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 
00:24:28.330 [2024-07-12 11:28:54.217149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.217213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.217481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.217544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.217799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.217849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.218029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.218090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.218226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.218271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 
00:24:28.330 [2024-07-12 11:28:54.218567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.218618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.218874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.218927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.219127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.219212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.219479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.219543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.219794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.219857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 
00:24:28.330 [2024-07-12 11:28:54.220092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.220155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.330 [2024-07-12 11:28:54.220373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.330 [2024-07-12 11:28:54.220437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.330 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.220733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.220806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.221089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.221132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.221306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.221374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.221662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.221712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.221912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.221964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.222208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.222271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.222528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.222578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.222770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.222820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.223131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.223184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.223390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.223462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.223688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.223754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.224071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.224139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.224425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.224488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.224702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.224765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.224998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.225064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.225339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.225381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.225547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.225614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.225892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.225936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.226125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.226189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.226449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.226511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.226684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.226745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.227016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.227068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.227306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.227349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.227509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.227551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.227757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.227831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.228114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.228165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.228427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.228490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.228740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.228794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.229041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.229106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.229417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.229486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.229784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.229845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.230096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.230151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.230413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.230464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.230671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.230714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.230885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.230953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.231254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.231319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.231576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.231642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.231901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.231957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 
00:24:28.331 [2024-07-12 11:28:54.232226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.232277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.232519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.331 [2024-07-12 11:28:54.232561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.331 qpair failed and we were unable to recover it. 00:24:28.331 [2024-07-12 11:28:54.232790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.232840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.233050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.233128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.233370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.233433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 
00:24:28.332 [2024-07-12 11:28:54.233728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.233778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.233970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.234023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.234258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.234317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.234491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.234534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.234767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.234817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 
00:24:28.332 [2024-07-12 11:28:54.234993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.235045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.235261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.235314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.235468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.235510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.235637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.235680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.235807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.235849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 
00:24:28.332 [2024-07-12 11:28:54.236089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.236143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.236412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.236476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.236733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.236796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.237086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.237168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.237374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.237417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 
00:24:28.332 [2024-07-12 11:28:54.237554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.237634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.237922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.237975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.238211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.238265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.238446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.238513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 00:24:28.332 [2024-07-12 11:28:54.238708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.332 [2024-07-12 11:28:54.238783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.332 qpair failed and we were unable to recover it. 
00:24:28.335 [2024-07-12 11:28:54.273071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.273135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.273386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.273449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.273746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.273818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.274106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.274171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.274482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.274551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 
00:24:28.335 [2024-07-12 11:28:54.274774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.274836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.275104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.275170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.275428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.275472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.275709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.275772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.276102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.276146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 
00:24:28.335 [2024-07-12 11:28:54.276361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.276403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.276565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.276644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.276936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.277002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.277301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.277364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.277665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.277739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 
00:24:28.335 [2024-07-12 11:28:54.277992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.278060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.278318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.278381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.278649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.278712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.279015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.279081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.279333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.279396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 
00:24:28.335 [2024-07-12 11:28:54.279655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.279718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.279966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.280030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.280322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.280385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.335 qpair failed and we were unable to recover it. 00:24:28.335 [2024-07-12 11:28:54.280693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.335 [2024-07-12 11:28:54.280756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.281062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.281135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.281407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.281470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.281715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.281781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.282098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.282164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.282425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.282492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.282690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.282753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.283014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.283079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.283321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.283385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.283657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.283720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.283953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.284017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.284164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.284206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.284384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.284426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.284698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.284747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.284912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.284986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.285173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.285238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.285398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.285441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.285726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.285789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.286103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.286175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.286425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.286488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.286784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.286848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.287114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.287177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.287467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.287529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.287808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.287898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.288157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.288221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.288476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.288539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.288780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.288823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.289026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.289099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.289319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.289381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.289555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.289617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.289905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.289949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.290124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.290193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.290431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.290496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.290764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.290828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.291055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.291119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.291408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.291472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.291772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.291835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 
00:24:28.336 [2024-07-12 11:28:54.292086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.292152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.292446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.292510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.292800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.292864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.336 [2024-07-12 11:28:54.293186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.336 [2024-07-12 11:28:54.293249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.336 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.293504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.293546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 
00:24:28.337 [2024-07-12 11:28:54.293720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.293798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.294121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.294190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.294447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.294510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.294788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.294829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.295043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.295107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 
00:24:28.337 [2024-07-12 11:28:54.295313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.295377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.295615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.295678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.295977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.296021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.296165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.296209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.296517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.296579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 
00:24:28.337 [2024-07-12 11:28:54.296891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.296956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.297215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.297286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.297513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.297575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.297878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.297951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.298207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.298270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 
00:24:28.337 [2024-07-12 11:28:54.298510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.298573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.298856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.298941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.299197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.299259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.299519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.299583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 00:24:28.337 [2024-07-12 11:28:54.299901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.337 [2024-07-12 11:28:54.299946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.337 qpair failed and we were unable to recover it. 
00:24:28.337 [2024-07-12 11:28:54.300113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.300156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.300360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.300430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.300720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.300784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.301089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.301155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 680666 Killed "${NVMF_APP[@]}" "$@"
00:24:28.337 [2024-07-12 11:28:54.301380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.301454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.301717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.301780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.302032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:24:28.337 [2024-07-12 11:28:54.302097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.302397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:24:28.337 [2024-07-12 11:28:54.302460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:28.337 [2024-07-12 11:28:54.302751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.302819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:28.337 [2024-07-12 11:28:54.303094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.303159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:28.337 [2024-07-12 11:28:54.303450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.303513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.337 [2024-07-12 11:28:54.303759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.337 [2024-07-12 11:28:54.303821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.337 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.304076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.304140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.304385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.304449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.304738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.304802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.305036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.305070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.305249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.305283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.305577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.305641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.305833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.305882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.306006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.306038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.306184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.306218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.306358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.306392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.306492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.306526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.306724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.306787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.307006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.307050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.307164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.307199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.307364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.307403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.307619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.307683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.307949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.307984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.308174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.308245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.308490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.308555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.308756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.308789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=681201
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 681201
00:24:28.338 [2024-07-12 11:28:54.308961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.308995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.309135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 681201 ']'
00:24:28.338 [2024-07-12 11:28:54.309171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.309268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:28.338 [2024-07-12 11:28:54.309302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:28.338 [2024-07-12 11:28:54.309447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.309481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:28.338 11:28:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:28.338 [2024-07-12 11:28:54.309715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.309792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.310452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.310509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.310756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.310813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.310986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.311170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.311321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.311460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.311600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.311740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.311916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.311952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.312081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.312134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.338 [2024-07-12 11:28:54.312333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.338 [2024-07-12 11:28:54.312385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.338 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.312587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.312639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.312834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.312881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.313029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.313064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.313259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.313312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.313504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.313555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.313751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.313785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.313917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.313952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.314057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.314091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.314304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.314368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.314676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.314754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.314971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.315005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.315223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.315288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.315578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.315612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.315757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.315791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.315920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.315955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.316099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.316133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.316306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.316398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.316649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.316714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.316954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.316989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.317106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.317140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.317354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.317418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.317652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.317716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.317943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.317978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.318101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.318135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.318309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.318344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.318546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.318611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.318824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.318877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.318998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.319031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.319185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.319250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.319517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.319582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.319824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.319888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.319995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.320029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.320278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.320342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.320629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.320704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.320929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.320964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.321105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.321140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.321334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.321368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.321536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.339 [2024-07-12 11:28:54.321570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.339 qpair failed and we were unable to recover it.
00:24:28.339 [2024-07-12 11:28:54.321680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.339 [2024-07-12 11:28:54.321713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.339 qpair failed and we were unable to recover it. 00:24:28.339 [2024-07-12 11:28:54.321825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.339 [2024-07-12 11:28:54.321876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.339 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.322020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.322054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.322305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.322369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.322553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.322625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.322837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.322878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.322985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.323019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.323213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.323277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.323501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.323565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.323771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.323804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.323960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.323995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.324118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.324151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.324261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.324296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.324474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.324540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.324743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.324789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.324938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.324974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.325084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.325118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.325285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.325348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.325564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.325641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.325838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.325880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.325999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.326033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.326233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.326297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.326546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.326612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.326801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.326835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.326956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.326991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.327096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.327131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.327308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.327342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.327581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.327646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.327914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.327948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.328057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.328091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.328205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.328275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.328564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.328628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.328882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.328917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.329024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.329058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.329179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.329214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.329427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.329491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.329721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.329784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.329967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.330002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.330251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.330315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.330601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.330664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 
00:24:28.340 [2024-07-12 11:28:54.330923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.330957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.331081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.331115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.340 qpair failed and we were unable to recover it. 00:24:28.340 [2024-07-12 11:28:54.331286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.340 [2024-07-12 11:28:54.331366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.331641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.331700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.331945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.331979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.332088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.332122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.332250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.332283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.332458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.332516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.332753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.332811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.333046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.333081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.333277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.333311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.333479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.333513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.333707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.333767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.333953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.333989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.334174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.334252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.334454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.334513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.334761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.334819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.335023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.335058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.335257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.335334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.335572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.335606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.335749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.335783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.335921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.335957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.336070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.336104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.336297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.336374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.336561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.336621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.336824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.336859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.337036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.337069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.337188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.337250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.337543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.337626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.337833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.337873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.338021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.338055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.338233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.338309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.338574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.338651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.338852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.338893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.339047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.339080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.339199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.339235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.339385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.339445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.339665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.339698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.339830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.339863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.339984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.340019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 
00:24:28.341 [2024-07-12 11:28:54.340141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.340175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.340369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.340428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.340660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.341 [2024-07-12 11:28:54.340718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.341 qpair failed and we were unable to recover it. 00:24:28.341 [2024-07-12 11:28:54.340932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.340967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 00:24:28.342 [2024-07-12 11:28:54.341117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.341151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 
00:24:28.342 [2024-07-12 11:28:54.341296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.341334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 00:24:28.342 [2024-07-12 11:28:54.341474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.341508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 00:24:28.342 [2024-07-12 11:28:54.341651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.341684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 00:24:28.342 [2024-07-12 11:28:54.341816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.341850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 00:24:28.342 [2024-07-12 11:28:54.342010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.342 [2024-07-12 11:28:54.342043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.342 qpair failed and we were unable to recover it. 
00:24:28.343 [2024-07-12 11:28:54.355328] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:24:28.343 [2024-07-12 11:28:54.355418] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:24:28.345 [2024-07-12 11:28:54.369860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.369892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.369978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.370005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.370086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.370139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.370378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.370451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.370673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.370699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.370820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.370847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.370949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.370974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.371062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.371087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.371216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.371273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.371545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.371599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.371828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.371884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.372019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.372092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.372348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.372402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.372679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.372751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.372945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.372972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.373123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.373199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.373507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.373579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.373759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.373784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.373906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.373937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.374024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.374050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.374253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.374307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.374530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.374580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.374811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.374878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.375018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.375044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.375201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.375254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.375441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.375493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.375720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.375746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.375830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.375855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.375987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.376013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.376186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.376258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.376477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.376531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.376746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.376772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.376863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.376897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.376994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.377021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.377103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.377129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 
00:24:28.345 [2024-07-12 11:28:54.377221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.377246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.377460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.377757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.377812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.345 qpair failed and we were unable to recover it. 00:24:28.345 [2024-07-12 11:28:54.377944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.345 [2024-07-12 11:28:54.377970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.378085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.378111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.378274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.378328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.378544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.378599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.378816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.378842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.378989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.379016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.379130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.379156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.379262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.379303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.379538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.379568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.379804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.379896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.380019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.380074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.380318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.380386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.380621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.380694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.380809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.380836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.380965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.380992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.381147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.381244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.381580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.381656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.381779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.381803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.381916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.381946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.382067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.382127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.382431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.382511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.382715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.382779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.382978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.383005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.383173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.383270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.383542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.383608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.383845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.383879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.383995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.384021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.384141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.384168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.384459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.384537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.384873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.384900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.385015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.385045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.385211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.385290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 
00:24:28.346 [2024-07-12 11:28:54.385480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.385507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.385776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.385842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.386037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.346 [2024-07-12 11:28:54.386065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.346 qpair failed and we were unable to recover it. 00:24:28.346 [2024-07-12 11:28:54.386256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.386283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.386523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.386591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 
00:24:28.347 [2024-07-12 11:28:54.386855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.386935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.387052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.387079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.387197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.387257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.387575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.387644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.387848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.387928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 
00:24:28.347 [2024-07-12 11:28:54.388047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.388074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.388194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.388259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.388567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.388645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.388902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.388930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 00:24:28.347 [2024-07-12 11:28:54.389044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.347 [2024-07-12 11:28:54.389097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.347 qpair failed and we were unable to recover it. 
00:24:28.347 [2024-07-12 11:28:54.389407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.389504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.389745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.389772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.389872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.389900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.390016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.390041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.390133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.390198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.390329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.390395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.390601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.390663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.390862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.390896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.391013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.391040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.391129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.391155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.391312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.391368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.391620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.391684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.391900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.391927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.392024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.392050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.392138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.392215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.392439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.392503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 EAL: No free 2048 kB hugepages reported on node 1
00:24:28.347 [2024-07-12 11:28:54.392807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.392889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.393031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.393055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.393173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.393199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.393292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.393318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.393489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.393552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.393812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.393894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.394133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.394197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.394503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.394576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.394771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.394796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.394889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.394916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.395008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.395034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.395164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.395203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.347 qpair failed and we were unable to recover it.
00:24:28.347 [2024-07-12 11:28:54.395421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.347 [2024-07-12 11:28:54.395451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.395684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.395753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.395987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.396110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.396228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.396468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.396686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.396805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.396924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.396951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.397947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.397972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.398909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.398935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.399884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.399913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.400951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.348 [2024-07-12 11:28:54.400977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.348 qpair failed and we were unable to recover it.
00:24:28.348 [2024-07-12 11:28:54.401086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.401227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.401363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.401504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.401626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.401766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.401898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.401924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.402973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.402999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.403905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.403931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.404859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.404890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.405964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.405991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.406096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.406122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.406207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.406233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.406383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.406410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.349 qpair failed and we were unable to recover it.
00:24:28.349 [2024-07-12 11:28:54.406523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.349 [2024-07-12 11:28:54.406549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.406635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.406661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.406776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.406802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.406929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.406956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.407971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.407997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.408916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.408943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.409888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.409915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.410957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.410984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.411090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.411116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.350 qpair failed and we were unable to recover it.
00:24:28.350 [2024-07-12 11:28:54.411242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.350 [2024-07-12 11:28:54.411268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.411350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.411377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.411508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.411548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.411668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.411697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.411815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.411846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.411946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.411973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.412924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.412952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.413927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.413955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.414909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.414992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.415883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.415974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.351 [2024-07-12 11:28:54.416765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.351 qpair failed and we were unable to recover it.
00:24:28.351 [2024-07-12 11:28:54.416864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.352 [2024-07-12 11:28:54.416910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.352 qpair failed and we were unable to recover it.
00:24:28.352 [2024-07-12 11:28:54.417036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.352 [2024-07-12 11:28:54.417064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.352 qpair failed and we were unable to recover it.
00:24:28.352 [2024-07-12 11:28:54.417157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.352 [2024-07-12 11:28:54.417184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.352 qpair failed and we were unable to recover it.
00:24:28.352 [2024-07-12 11:28:54.417268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.352 [2024-07-12 11:28:54.417295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.352 qpair failed and we were unable to recover it.
00:24:28.352 [2024-07-12 11:28:54.417424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.352 [2024-07-12 11:28:54.417451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.352 qpair failed and we were unable to recover it.
00:24:28.352 [2024-07-12 11:28:54.417569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.417596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.417713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.417739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.417856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.417893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.418017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.418180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.418317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.418428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.418594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.418711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.418853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.418887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.418982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.419121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.419264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.419394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.419511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.419683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.419828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.419871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.419992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.420134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.420259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.420361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.420509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.420610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.420753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.420928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.420957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.421047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.421194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.421362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.421480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.421623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.421727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.421846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.421903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.422026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.422054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.422150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.422183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 00:24:28.352 [2024-07-12 11:28:54.422266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.422292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.352 qpair failed and we were unable to recover it. 
00:24:28.352 [2024-07-12 11:28:54.422367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.352 [2024-07-12 11:28:54.422393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.422482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.422509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.422596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.422623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.422711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.422738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.422854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.422888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.423015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.423158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.423304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.423466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.423634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.423738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.423885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.423913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.424008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.424190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.424321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.424470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.424585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.424696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.424879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.424907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.425028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.425143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.425319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.425465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.425634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.425740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.425852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.425889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.425977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.426118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.426234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.426379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.426518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.426663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.426780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.426919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.426960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.427054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.427199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.427341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.427473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.427575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.353 [2024-07-12 11:28:54.427711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 
00:24:28.353 [2024-07-12 11:28:54.427881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.353 [2024-07-12 11:28:54.427908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.353 qpair failed and we were unable to recover it. 00:24:28.354 [2024-07-12 11:28:54.427999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.354 [2024-07-12 11:28:54.428026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.354 qpair failed and we were unable to recover it. 00:24:28.354 [2024-07-12 11:28:54.428121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.354 [2024-07-12 11:28:54.428158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.354 qpair failed and we were unable to recover it. 00:24:28.354 [2024-07-12 11:28:54.428165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.354 [2024-07-12 11:28:54.428272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.354 [2024-07-12 11:28:54.428299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.428411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.428437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 
00:24:28.637 [2024-07-12 11:28:54.428536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.428564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.428677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.428703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.428811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.428860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.428959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.428987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.429081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 
00:24:28.637 [2024-07-12 11:28:54.429242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.429358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.429474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.429611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.429753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 
00:24:28.637 [2024-07-12 11:28:54.429930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.429970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.430095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.430271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.430384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.430525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 
00:24:28.637 [2024-07-12 11:28:54.430631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.430809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.430958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.430986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.431069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.431096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.431211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.431239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 
00:24:28.637 [2024-07-12 11:28:54.431325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.431351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.637 [2024-07-12 11:28:54.431436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.637 [2024-07-12 11:28:54.431462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.637 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.431556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.431588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.431681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.431708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.431830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.431856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.431988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.432165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.432282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.432396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.432502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.432637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.432767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.432944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.432972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.433067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.433184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.433333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.433475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.433603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.433766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.433906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.433933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.434072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.434214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.434347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.434512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.434631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.434800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.434967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.434995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.435097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.435238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.435345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.435483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.435619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.435787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.435953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.435981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.436105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.436131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.436218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.436245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.436363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.436390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.436530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.436558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.436672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.436712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.436848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.436898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 
00:24:28.638 [2024-07-12 11:28:54.437001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.437030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.638 [2024-07-12 11:28:54.437180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.638 [2024-07-12 11:28:54.437207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.638 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.437318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.437344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.437491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.437517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.437637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.437663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.437793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.437833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.437952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.437980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.438072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.438100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.438196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.438223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.438332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.438359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.438505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.438531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.438674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.438700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.438814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.438844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.438974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.439114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.439225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.439340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.439522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.439638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.439810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.439950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.439979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.440075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.440249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.440361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.440477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.440588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.440773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.440915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.440942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.441050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.441162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.441328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.441437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.441608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.441762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.441882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.441909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.442026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.442176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.442322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.442465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.442611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 
00:24:28.639 [2024-07-12 11:28:54.442757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.639 qpair failed and we were unable to recover it. 00:24:28.639 [2024-07-12 11:28:54.442880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.639 [2024-07-12 11:28:54.442908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.443374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.443911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.443938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.444043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.444183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.444321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.444431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.444589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.444756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.444907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.444937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.445036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.445146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.445309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.445452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.445616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.445742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.445933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.445960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.446058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.446167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.446301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.446440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.446543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.446648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.446773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.446913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.446943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.447465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.447881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.447985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.448014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 
00:24:28.640 [2024-07-12 11:28:54.448164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.448190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.448302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.640 [2024-07-12 11:28:54.448329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.640 qpair failed and we were unable to recover it. 00:24:28.640 [2024-07-12 11:28:54.448473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.448500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.448618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.448646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.448763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.448792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.641 [2024-07-12 11:28:54.448906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.448936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.641 [2024-07-12 11:28:54.449528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.449958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.449985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.450095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.641 [2024-07-12 11:28:54.450207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.450326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.450468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.450613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.450785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.641 [2024-07-12 11:28:54.450921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.450948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.641 [2024-07-12 11:28:54.451529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.451921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.451949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.452077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.452104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.641 [2024-07-12 11:28:54.452196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.452223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.452304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.452330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.452449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.452475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.452561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.452594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 00:24:28.641 [2024-07-12 11:28:54.452691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.641 [2024-07-12 11:28:54.452719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.641 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.452821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.452861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.452982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.453142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.453288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.453399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.453514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.453664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.453813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.453841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.453985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.454111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.454248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.454393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.454512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.454642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.454810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.454957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.454984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.455064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.455208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.455347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.455455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.455572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.455711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.455844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.455881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.456002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.456117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.456259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.456371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.456476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.456585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 00:24:28.642 [2024-07-12 11:28:54.456716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.642 [2024-07-12 11:28:54.456742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.642 qpair failed and we were unable to recover it. 
00:24:28.642 [2024-07-12 11:28:54.456863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.456895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.457888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.457917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.458067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.458094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.458212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.642 [2024-07-12 11:28:54.458239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.642 qpair failed and we were unable to recover it.
00:24:28.642 [2024-07-12 11:28:54.458324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.458352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.458472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.458500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.458619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.458660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.458775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.458803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.458930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.458960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.459857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.459889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.460861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.460908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.461838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.461880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.462910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.462991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.463132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.463250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.463386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.463535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.463698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.643 [2024-07-12 11:28:54.463827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.643 [2024-07-12 11:28:54.463876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.643 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.464912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.464941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.465879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.465919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.466883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.466912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.467899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.467996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.468921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.468952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.644 qpair failed and we were unable to recover it.
00:24:28.644 [2024-07-12 11:28:54.469076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.644 [2024-07-12 11:28:54.469116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.645 qpair failed and we were unable to recover it.
00:24:28.645 [2024-07-12 11:28:54.469242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.645 [2024-07-12 11:28:54.469271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.645 qpair failed and we were unable to recover it.
00:24:28.645 [2024-07-12 11:28:54.469385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.645 [2024-07-12 11:28:54.469412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.645 qpair failed and we were unable to recover it.
00:24:28.645 [2024-07-12 11:28:54.469526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.645 [2024-07-12 11:28:54.469553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.645 qpair failed and we were unable to recover it.
00:24:28.645 [2024-07-12 11:28:54.469638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.645 [2024-07-12 11:28:54.469665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.645 qpair failed and we were unable to recover it.
00:24:28.645 [2024-07-12 11:28:54.469776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.469803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.469897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.469926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.470071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.470180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.470293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.470433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.470555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.470701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.470862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.470898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.471014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.471187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.471334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.471472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.471612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.471716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.471878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.471919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.472048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.472236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.472399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.472538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.472680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.472830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.472958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.472985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.473099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.473210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.473384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.473530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.473695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.473834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.473971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.473998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.474111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.474137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.474259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.474285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.474411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.474439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.474527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.474554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.645 [2024-07-12 11:28:54.474696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.474728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 
00:24:28.645 [2024-07-12 11:28:54.474842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.645 [2024-07-12 11:28:54.474875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.645 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.474962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.474990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.475102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.475245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.475396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.475510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.475685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.475800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.475953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.475981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.476094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.476245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.476420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.476567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.476685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.476821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.476949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.476976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.477127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.477154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.477266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.477293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.477412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.477439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.477559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.477588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.477713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.477754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.477916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.477957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.478081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.478110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.478223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.478249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.478369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.478396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.478516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.478544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.478671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.478712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.478829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.478857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.479007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.479126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.479233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.479368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.479480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.479600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.479738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 
00:24:28.646 [2024-07-12 11:28:54.479896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.646 [2024-07-12 11:28:54.479925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.646 qpair failed and we were unable to recover it. 00:24:28.646 [2024-07-12 11:28:54.480046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.480191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.480313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.480451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 
00:24:28.647 [2024-07-12 11:28:54.480599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.480740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.480890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.480931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.481065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.481179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 
00:24:28.647 [2024-07-12 11:28:54.481284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.481453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.481565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.481709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 00:24:28.647 [2024-07-12 11:28:54.481883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.647 [2024-07-12 11:28:54.481913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.647 qpair failed and we were unable to recover it. 
00:24:28.647 [2024-07-12 11:28:54.482023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.482953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.482981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.483094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.483122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.483243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.483271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.483411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.483438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.483562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.483603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.483727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.483755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.483873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.483914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.484856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.484890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.485004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.485030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.485169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.485195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.485344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.485370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.485477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.485503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.485618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.485647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.647 qpair failed and we were unable to recover it.
00:24:28.647 [2024-07-12 11:28:54.485762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.647 [2024-07-12 11:28:54.485789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.485909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.485938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.486858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.486987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.487147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.487310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.487479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.487644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.487816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.487966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.487993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.488109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.488136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.488291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.488319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.488485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.488525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.488635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.488663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.488792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.488829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.488938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.488966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.489934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.489962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.490101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.490278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.490399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.490541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.490684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.490872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.490991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.491018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.491132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.491165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.491277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.491303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.491417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.491443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.491587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.491613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.648 qpair failed and we were unable to recover it.
00:24:28.648 [2024-07-12 11:28:54.491705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.648 [2024-07-12 11:28:54.491732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.491827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.491863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.491985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.492011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.492153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.492179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.492297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.492324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.493124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.493168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.493297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.493325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.493435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.493461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.493549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.493575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.493999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.494194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.494335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.494453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.494605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.494776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.494912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.494941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.495925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.495952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.496083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.496230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.496372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.496527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.496697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.496843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.496963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.497005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.497135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.497181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.497298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.497325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.497475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.497509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.497648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.497676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.497820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.497847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.498017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.498044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.498137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.498175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.498284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.498311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.498428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.498455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.498538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.498565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.649 qpair failed and we were unable to recover it.
00:24:28.649 [2024-07-12 11:28:54.498708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.649 [2024-07-12 11:28:54.498735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.498858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.498892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.498983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.499166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.499337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.499476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.499583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.499716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.499911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.650 [2024-07-12 11:28:54.499953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.650 qpair failed and we were unable to recover it.
00:24:28.650 [2024-07-12 11:28:54.500080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.500239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.500386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.500506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.500623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 
00:24:28.650 [2024-07-12 11:28:54.500792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.500949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.500989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.501088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.501239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.501387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 
00:24:28.650 [2024-07-12 11:28:54.501541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.501666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.501817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.501972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.501999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.502144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.502181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 
00:24:28.650 [2024-07-12 11:28:54.502277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.502304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.502444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.502470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.502568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.502594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.502711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.502738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.502860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.502895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 
00:24:28.650 [2024-07-12 11:28:54.502981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.503007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.503149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.503182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.503303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.503330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.503475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.503502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.650 [2024-07-12 11:28:54.503592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.503619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 
00:24:28.650 [2024-07-12 11:28:54.503735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.650 [2024-07-12 11:28:54.503762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.650 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.503889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.503929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.504034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.504076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.504196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.504233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.504355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.504382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.504507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.504535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.504682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.504709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.504832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.504928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.505052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.505207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.505343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.505516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.505636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.505747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.505930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.505971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.506067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.506240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.506375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.506517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.506631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.506745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.506931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.506958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.507075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.507195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.507344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.507493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.507618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.507734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.507877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.507904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.508018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.508167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.508272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.508390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.508538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.508708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.508851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.508895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.509007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.509034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.509160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.509207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.509335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.509363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.509479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.509506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 
00:24:28.651 [2024-07-12 11:28:54.509593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.651 [2024-07-12 11:28:54.509619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.651 qpair failed and we were unable to recover it. 00:24:28.651 [2024-07-12 11:28:54.509728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.509755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.509898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.509925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.510008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.510142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 
00:24:28.652 [2024-07-12 11:28:54.510288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.510432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.510541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.510686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.510795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 
00:24:28.652 [2024-07-12 11:28:54.510920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.510948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.511042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.511070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.511149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.511184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.511299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.511325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 00:24:28.652 [2024-07-12 11:28:54.511440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.652 [2024-07-12 11:28:54.511466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.652 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.527027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.527197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.527350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.527491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.527603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.527773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.527909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.527951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.528078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.528194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.528330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.528453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.528635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.528803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.528969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.528997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.529116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.529143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.529270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.529298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.529385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.529411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.529499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.529527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.529675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.529701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.529813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.529847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.529981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.530103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.530250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.530369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.530490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.530594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.530733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.530898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.530939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.531061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.531090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.531208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.531234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 
00:24:28.655 [2024-07-12 11:28:54.531313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.531340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.655 [2024-07-12 11:28:54.531452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.655 [2024-07-12 11:28:54.531478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.655 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.531598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.531626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.531739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.531780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.531913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.531960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.532082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.532206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.532360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.532504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.532633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.532791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.532922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.532951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.533095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.533207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.533344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.533479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.533600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.533761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.533933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.533962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.534086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.534113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.534264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.534290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.534411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.534438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.534522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.534548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.534670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.534697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.534819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.534875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.535001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.535119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.535241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.535353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.535493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.535641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.535791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.535957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.535999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.536123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.536249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.536362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.536501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.536614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.536754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.536897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.536924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 
00:24:28.656 [2024-07-12 11:28:54.537045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.537072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.537214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.537240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.656 qpair failed and we were unable to recover it. 00:24:28.656 [2024-07-12 11:28:54.537365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.656 [2024-07-12 11:28:54.537395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.537485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.537513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.537623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.537651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.537767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.537794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.537885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.537913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.538403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.538907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.538990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.539106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.539244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.539398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.539513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.539657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.539825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.539852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.539979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.540123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.540260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.540391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.540544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.540684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.540799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.540947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.540986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.541088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.541127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.541281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.541309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.541432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.541459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.541551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.541577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.541737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.541777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.541897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.541926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.542016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.542150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.542254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.542390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.542524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 
00:24:28.657 [2024-07-12 11:28:54.542655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.542829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.657 qpair failed and we were unable to recover it. 00:24:28.657 [2024-07-12 11:28:54.542971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.657 [2024-07-12 11:28:54.542998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.543114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.543230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.543346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.543467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.543573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.543686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.543801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.543954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.543981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:28.658 [2024-07-12 11:28:54.544559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.658 [2024-07-12 11:28:54.544594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.658 [2024-07-12 11:28:54.544607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.658 [2024-07-12 11:28:54.544617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.658 [2024-07-12 11:28:54.544730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.544778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:28.658 [2024-07-12 11:28:54.544832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:28.658 [2024-07-12 11:28:54.544923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.544892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:28.658 [2024-07-12 11:28:54.544949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 [2024-07-12 11:28:54.544897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.545045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.545188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.545304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.545453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.545592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.545747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.545861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.545902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.545988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.546124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.546266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.546407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.546544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.546657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.546793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.546948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.546976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.547090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.547115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.547200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.547226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.547310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.547336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.547449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.547475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.547595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.547624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 
00:24:28.658 [2024-07-12 11:28:54.547722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.658 [2024-07-12 11:28:54.547747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.658 qpair failed and we were unable to recover it. 00:24:28.658 [2024-07-12 11:28:54.547841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.547884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.547975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.548083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.548194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.548302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.548458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.548611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.548756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.548880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.548907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.548991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.549102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.549209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.549314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.549420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.549564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.549701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.549858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.549892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.550009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.550146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.550256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.550395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.550565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.550718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.550852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.550890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.550989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.551101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.551240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.551385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.551506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.551656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.551793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.551941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.551968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.552048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.552074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 00:24:28.659 [2024-07-12 11:28:54.552154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.552180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.659 qpair failed and we were unable to recover it. 
00:24:28.659 [2024-07-12 11:28:54.552258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.659 [2024-07-12 11:28:54.552285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.552388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.552415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.552519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.552547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.552640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.552666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.552751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.552780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.552873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.552899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.552989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.553460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.553957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.553984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.554065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.554183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.554302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.554460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.554595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.554701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.554842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.554964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.554990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.555087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.555210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.555321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.555435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.555549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.555660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.555803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.555951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.555980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.556580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.556884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.556973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.557000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.660 [2024-07-12 11:28:54.557096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.557122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 
00:24:28.660 [2024-07-12 11:28:54.557249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.660 [2024-07-12 11:28:54.557275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.660 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.557393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.557423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.557543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.557571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.557652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.557678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.557788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.557814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.557905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.557932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.558545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.558919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.558959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.559055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.559174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.559298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.559406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.559528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.559659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.559769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.559903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.559931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.560449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.560876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.560974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.561086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.561197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.561315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.561457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.561585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.561727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.561860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.561895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.561986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.562014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.562100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.562127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 00:24:28.661 [2024-07-12 11:28:54.562221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.562247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it. 
00:24:28.661 [2024-07-12 11:28:54.562333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.661 [2024-07-12 11:28:54.562362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.661 qpair failed and we were unable to recover it.
00:24:28.664 (the same record — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously between 11:28:54.562 and 11:28:54.576 for tqpairs 0x7fa0e8000b90, 0x7fa0d8000b90, 0x7fa0e0000b90, and 0xb4f200)
00:24:28.664 [2024-07-12 11:28:54.576218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.664 [2024-07-12 11:28:54.576244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.664 qpair failed and we were unable to recover it. 00:24:28.664 [2024-07-12 11:28:54.576329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.576355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.576438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.576465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.576592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.576624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.576747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.576774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.576873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.576902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.576992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.577115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.577247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.577374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.577501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.577622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.577731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.577899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.577929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.578012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.578118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.578228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.578337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.578450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.578576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.578701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.578827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.578993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.579435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.579903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.579943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.580029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.580145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.580324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.580430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.580546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.580682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.580798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.580916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.580943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.581026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.581052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.581141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.581179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 
00:24:28.665 [2024-07-12 11:28:54.581268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.581294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.665 [2024-07-12 11:28:54.581374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.665 [2024-07-12 11:28:54.581401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.665 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.581506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.581532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.581625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.581651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.581734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.581759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.581840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.581877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.581972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.582460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.582903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.582931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.583021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.583604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.583902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.583986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.584102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.584245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.584363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.584479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.584593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.584702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.584808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.584931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.584957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.585043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.585069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.585149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.585179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 00:24:28.666 [2024-07-12 11:28:54.585334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.666 [2024-07-12 11:28:54.585362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.666 qpair failed and we were unable to recover it. 
00:24:28.666 [2024-07-12 11:28:54.585453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.585478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.585560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.585585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.585667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.585693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.585801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.585828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.585928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.585957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.586057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.586084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.586186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.586213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.666 qpair failed and we were unable to recover it.
00:24:28.666 [2024-07-12 11:28:54.586289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.666 [2024-07-12 11:28:54.586315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.586397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.586423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.586507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.586534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.586617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.586646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.586741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.586768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.586861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.586906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.586998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.587934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.587967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.588919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.588997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.589902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.589930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.590010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.590036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.590115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.590141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.590219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.590245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.590320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.590347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.667 [2024-07-12 11:28:54.590431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.667 [2024-07-12 11:28:54.590458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.667 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.590542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.590568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.590651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.590677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.590771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.590816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.590941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.590970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.591896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.591924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.592909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.592998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.593932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.593960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.594906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.594933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.595017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.595044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.595124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.595159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.595242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.668 [2024-07-12 11:28:54.595268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.668 qpair failed and we were unable to recover it.
00:24:28.668 [2024-07-12 11:28:54.595360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.595386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.595486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.595515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.595604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.595637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.595725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.595753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.595831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.595872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.595953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.595979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.596848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.596890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.597896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.597987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.598887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.598975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.599001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.599093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.669 [2024-07-12 11:28:54.599121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.669 qpair failed and we were unable to recover it.
00:24:28.669 [2024-07-12 11:28:54.599214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.599330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.599439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.599545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.599655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 
00:24:28.669 [2024-07-12 11:28:54.599767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.599888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.599914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.599998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.600023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.669 [2024-07-12 11:28:54.600109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.669 [2024-07-12 11:28:54.600135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.669 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.600230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.600335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.600450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.600552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.600664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.600808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.600942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.600970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.601538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.601939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.601967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.602055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.602224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.602336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.602441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.602591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.602714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.602825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.602959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.602986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.603416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.603882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.603909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.603992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 
00:24:28.670 [2024-07-12 11:28:54.604567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.604908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.604948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.670 qpair failed and we were unable to recover it. 00:24:28.670 [2024-07-12 11:28:54.605042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.670 [2024-07-12 11:28:54.605069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 
00:24:28.671 [2024-07-12 11:28:54.605164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.605307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.605419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.605544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.605652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 
00:24:28.671 [2024-07-12 11:28:54.605756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.605883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.605912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.605997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.606110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.606223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 
00:24:28.671 [2024-07-12 11:28:54.606337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.606450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.606561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.606697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.606837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.606875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 
00:24:28.671 [2024-07-12 11:28:54.606976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 
00:24:28.671 [2024-07-12 11:28:54.607583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.607960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.607987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 00:24:28.671 [2024-07-12 11:28:54.608075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.671 [2024-07-12 11:28:54.608100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.671 qpair failed and we were unable to recover it. 
00:24:28.671 [2024-07-12 11:28:54.608181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.608287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.608424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.608561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.608671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.608784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.608903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.608929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.671 [2024-07-12 11:28:54.609882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.671 [2024-07-12 11:28:54.609910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.671 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.609997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.610952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.610979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.611896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.611976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.612966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.612992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.613885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.613913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.672 [2024-07-12 11:28:54.614008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.672 [2024-07-12 11:28:54.614034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.672 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.614883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.614910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.615920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.615998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.616922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.616949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.617962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.617988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.673 qpair failed and we were unable to recover it.
00:24:28.673 [2024-07-12 11:28:54.618875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.673 [2024-07-12 11:28:54.618902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.618986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.619969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.619995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.620886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.620914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.621025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.621144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.621255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.621397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.621519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.674 [2024-07-12 11:28:54.621633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.674 qpair failed and we were unable to recover it.
00:24:28.674 [2024-07-12 11:28:54.621711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.621738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.621831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.621858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.621954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.621980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.622077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.622189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 
00:24:28.674 [2024-07-12 11:28:54.622321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.622432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.622556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.622667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.622810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 
00:24:28.674 [2024-07-12 11:28:54.622933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.622959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.623037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.623063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.623146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.623172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.623255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.623280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.623368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.623393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 
00:24:28.674 [2024-07-12 11:28:54.623475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.674 [2024-07-12 11:28:54.623501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.674 qpair failed and we were unable to recover it. 00:24:28.674 [2024-07-12 11:28:54.623583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.623612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.623688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.623714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.623798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.623825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.623913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.623940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.624016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.624586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.624918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.624945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.625137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.625702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.625934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.625963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.626051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.626164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.626307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.626419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.626533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.626646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.626787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.626901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.626928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.627466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.627892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.627919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 
00:24:28.675 [2024-07-12 11:28:54.628004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.628032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.628120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.675 [2024-07-12 11:28:54.628145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.675 qpair failed and we were unable to recover it. 00:24:28.675 [2024-07-12 11:28:54.628233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.628344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.628446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 
00:24:28.676 [2024-07-12 11:28:54.628589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.628697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.628807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.628946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.628973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 
00:24:28.676 [2024-07-12 11:28:54.629167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 
00:24:28.676 [2024-07-12 11:28:54.629725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.629895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.629983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.630009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.630098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.630124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 00:24:28.676 [2024-07-12 11:28:54.630204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.676 [2024-07-12 11:28:54.630230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.676 qpair failed and we were unable to recover it. 
00:24:28.676 [2024-07-12 11:28:54.630309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.630335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.630428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.630456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.630543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.630571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.630660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.630688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.630813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.630840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.630931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.630958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.631918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.631946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.632025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.632056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.632137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.632163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.632243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.676 [2024-07-12 11:28:54.632269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.676 qpair failed and we were unable to recover it.
00:24:28.676 [2024-07-12 11:28:54.632348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.632374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.632448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.632474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.632568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.632609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.632722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.632762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.632882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.632911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.633958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.633984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.634915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.634942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.635938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.635966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.636928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.636956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.637033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.637059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.637169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.677 [2024-07-12 11:28:54.637195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.677 qpair failed and we were unable to recover it.
00:24:28.677 [2024-07-12 11:28:54.637280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.637395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.637504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.637642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.637755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.637858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.637970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.637996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.638926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.638955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.639918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.639948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.640900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.640983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.641904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.641934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.642028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.642054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.642138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.678 [2024-07-12 11:28:54.642163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.678 qpair failed and we were unable to recover it.
00:24:28.678 [2024-07-12 11:28:54.642248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.642280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.642359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.642385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.642498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.642525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.642636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.642664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.642754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.642779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.642863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.642895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.679 [2024-07-12 11:28:54.643804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.679 qpair failed and we were unable to recover it.
00:24:28.679 [2024-07-12 11:28:54.643889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.643915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 
00:24:28.679 [2024-07-12 11:28:54.644461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.644943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.644970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 
00:24:28.679 [2024-07-12 11:28:54.645056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.645169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.645307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.645427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.645531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 
00:24:28.679 [2024-07-12 11:28:54.645635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.645759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.645882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.645921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 
00:24:28.679 [2024-07-12 11:28:54.646250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 
00:24:28.679 [2024-07-12 11:28:54.646850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.646964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.646990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.647072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.647099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.647183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.647209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 00:24:28.679 [2024-07-12 11:28:54.647322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.679 [2024-07-12 11:28:54.647351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.679 qpair failed and we were unable to recover it. 
00:24:28.679 [2024-07-12 11:28:54.647433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.647458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.647543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.647569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.647651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.647676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.647765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.647810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.647940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.647970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.648067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.648177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.648305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.648414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.648550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.648665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.648794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.648914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.648943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.649051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.649171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.649275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.649381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.649532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.649664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.649785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.649895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.649921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.650446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.650920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.650951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.651054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.651164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.651279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.651387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.651525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 
00:24:28.680 [2024-07-12 11:28:54.651645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.651804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.651922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.651951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.652043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.680 [2024-07-12 11:28:54.652069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.680 qpair failed and we were unable to recover it. 00:24:28.680 [2024-07-12 11:28:54.652150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.681 [2024-07-12 11:28:54.652177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.681 qpair failed and we were unable to recover it. 
00:24:28.681 [2024-07-12 11:28:54.652288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.681 [2024-07-12 11:28:54.652314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.681 qpair failed and we were unable to recover it. 00:24:28.681 [2024-07-12 11:28:54.652393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.681 [2024-07-12 11:28:54.652419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.681 qpair failed and we were unable to recover it. 00:24:28.681 [2024-07-12 11:28:54.652527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.681 [2024-07-12 11:28:54.652553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.681 qpair failed and we were unable to recover it. 00:24:28.681 [2024-07-12 11:28:54.652628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.681 [2024-07-12 11:28:54.652659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.681 qpair failed and we were unable to recover it. 00:24:28.681 [2024-07-12 11:28:54.652741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.681 [2024-07-12 11:28:54.652767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.681 qpair failed and we were unable to recover it. 
00:24:28.681 [2024-07-12 11:28:54.652846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.652880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.652970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.652997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.653930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.653956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.654966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.654994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.655082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.655108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.655189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.655215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.655308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.655334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.655415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.655441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.681 [2024-07-12 11:28:54.655559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.681 [2024-07-12 11:28:54.655585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.681 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.655671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.655700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.655799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.655839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.655935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.655963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.656911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.656999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.657908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.657935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.658932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.658959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.659962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.659988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.660084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.660124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.660210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.660237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.682 [2024-07-12 11:28:54.660321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.682 [2024-07-12 11:28:54.660347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.682 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.660436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.660463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.660551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.660578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.660661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.660687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.660771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.660798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.660893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.660920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.660999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.661959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.662905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.662931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.663920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.663947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.664939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.664969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.683 [2024-07-12 11:28:54.665059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.683 [2024-07-12 11:28:54.665086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.683 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.665945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.665973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.666056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.666082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.666167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.684 [2024-07-12 11:28:54.666193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.684 qpair failed and we were unable to recover it.
00:24:28.684 [2024-07-12 11:28:54.666276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.666302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.666381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.666406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.666491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.666516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.666642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.666681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.666776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.666808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 
00:24:28.684 [2024-07-12 11:28:54.666898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.666926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 
00:24:28.684 [2024-07-12 11:28:54.667460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.667906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.667996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 
00:24:28.684 [2024-07-12 11:28:54.668105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.668209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.668312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.668461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.668583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 
00:24:28.684 [2024-07-12 11:28:54.668709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.668824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.668958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.668985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.669077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.669185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 
00:24:28.684 [2024-07-12 11:28:54.669289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.669427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.669540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.669645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.684 [2024-07-12 11:28:54.669746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 
00:24:28.684 [2024-07-12 11:28:54.669850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.684 [2024-07-12 11:28:54.669885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.684 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.669990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.670465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.670923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.670951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.671027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.671623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.671897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.671990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.672211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.672760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.672902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.672988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.673103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.673213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.673321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.673444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.673569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.673690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 00:24:28.685 [2024-07-12 11:28:54.673799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.685 [2024-07-12 11:28:54.673825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.685 qpair failed and we were unable to recover it. 
00:24:28.685 [2024-07-12 11:28:54.673912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.673939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 
00:24:28.686 [2024-07-12 11:28:54.674442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 00:24:28.686 [2024-07-12 11:28:54.674883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.686 [2024-07-12 11:28:54.674910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.686 qpair failed and we were unable to recover it. 
00:24:28.686 [2024-07-12 11:28:54.675006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.686 [2024-07-12 11:28:54.675032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.686 qpair failed and we were unable to recover it.
00:24:28.688 [... the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 11:28:54.675118 through 11:28:54.688583, alternating among tqpair handles 0x7fa0e8000b90, 0x7fa0d8000b90, 0x7fa0e0000b90, and 0xb4f200, all targeting addr=10.0.0.2, port=4420 ...]
00:24:28.689 [2024-07-12 11:28:54.688672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.688701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.688786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.688815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.688904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.688932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 
00:24:28.689 [2024-07-12 11:28:54.689282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 
00:24:28.689 [2024-07-12 11:28:54.689857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.689897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.689992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.690100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.690204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.690344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 
00:24:28.689 [2024-07-12 11:28:54.690483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.690612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.690769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.690879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.690909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 
00:24:28.689 [2024-07-12 11:28:54.691126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 
00:24:28.689 [2024-07-12 11:28:54.691722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.691943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.691969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.692044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.692070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.692151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.692185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 
00:24:28.689 [2024-07-12 11:28:54.692297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.692323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.692411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.692439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.692519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.689 [2024-07-12 11:28:54.692547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.689 qpair failed and we were unable to recover it. 00:24:28.689 [2024-07-12 11:28:54.692630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.692659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.692740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.692767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.692879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.692906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.693517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.693928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.693956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.694160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.694755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.694894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.694981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.695116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.695280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.695389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.695506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.695643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.695766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.695908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.695936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.696021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.696125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.696262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.696396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.696504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.696619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.696741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.696883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.696911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.697002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.697029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.697114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.697145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 
00:24:28.690 [2024-07-12 11:28:54.697253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.697280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.697371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.697397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.697481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.697507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.690 [2024-07-12 11:28:54.697627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.690 [2024-07-12 11:28:54.697654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.690 qpair failed and we were unable to recover it. 00:24:28.691 [2024-07-12 11:28:54.697750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.691 [2024-07-12 11:28:54.697778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.691 qpair failed and we were unable to recover it. 
00:24:28.691 [2024-07-12 11:28:54.697869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.697899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.697997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.698928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.698955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.699958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.699985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.700891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.700918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.701857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.701983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.702085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.702202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.702305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.702447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.702575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.691 qpair failed and we were unable to recover it.
00:24:28.691 [2024-07-12 11:28:54.702737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.691 [2024-07-12 11:28:54.702765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.702847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.702882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.702996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.703834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.703880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.704966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.704993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.705907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.705935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.706894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.706979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.707005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.707098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.707124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.707202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.707229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.707308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.707334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.707416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.707445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.692 [2024-07-12 11:28:54.707538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.692 [2024-07-12 11:28:54.707567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.692 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.707662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.707702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.707814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.707841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.707936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.707963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.708931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.708960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.709945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.709971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.710910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.710999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.711027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.711112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.711138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.711244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.711270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.711355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.711380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.711497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.711523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.711604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.693 [2024-07-12 11:28:54.711631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:28.693 qpair failed and we were unable to recover it.
00:24:28.693 [2024-07-12 11:28:54.711721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.694 [2024-07-12 11:28:54.711761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:28.694 qpair failed and we were unable to recover it.
00:24:28.694 [2024-07-12 11:28:54.711889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.694 [2024-07-12 11:28:54.711930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:28.694 qpair failed and we were unable to recover it.
00:24:28.694 [2024-07-12 11:28:54.712020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.712145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.712287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.712401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.712511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.712628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.712804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.712931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.712959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.713263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.713832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.713948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.713974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.714437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.714885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.714975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.715116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.715220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.715367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.715475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.715613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.715723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.715859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.715895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.715979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.716080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.716186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 
00:24:28.694 [2024-07-12 11:28:54.716292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.716400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.716515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.716651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.694 [2024-07-12 11:28:54.716677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.694 qpair failed and we were unable to recover it. 00:24:28.694 [2024-07-12 11:28:54.716785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.716811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.716938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.716984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.717577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.717855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.717981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.718094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.718247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.718349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.718484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.718610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.718769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.718879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.718906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.718988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.719430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.719919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.719947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.720055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.720200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.720305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.720421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.720556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.720695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.720797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.720916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.720944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.721043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.721206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.695 [2024-07-12 11:28:54.721311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.721476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.721577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.721721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 00:24:28.695 [2024-07-12 11:28:54.721835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.695 [2024-07-12 11:28:54.721863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.695 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.721959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.721985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.722557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.722923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.722952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.723063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.723175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.723281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.723388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.723501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.723639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.723746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.723914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.723942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.724370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.724904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.724931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.725023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.725567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.725939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.725966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.726045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.726153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.726259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.726364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.726498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 00:24:28.696 [2024-07-12 11:28:54.726603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.696 qpair failed and we were unable to recover it. 
00:24:28.696 [2024-07-12 11:28:54.726765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.696 [2024-07-12 11:28:54.726805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.726903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.726932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.727450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.727947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.727974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.728059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.728633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.728965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.728992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.729102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.729215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.729323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.729489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.729605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.729740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.729847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.729971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.729996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.730434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.730921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.730961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 
00:24:28.697 [2024-07-12 11:28:54.731049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.731078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.731165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.731192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.731308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.731335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.731443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.731470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.697 qpair failed and we were unable to recover it. 00:24:28.697 [2024-07-12 11:28:54.731563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.697 [2024-07-12 11:28:54.731591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.731699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.731726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.731874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.731905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.731988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.732091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.732193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.732329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.732443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.732641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.732779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.732889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.732918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.733009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.733664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.733911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.733996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.734114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.734218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.734364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.734479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.734618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.734728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.734844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.734894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.734995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.735479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.735959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.735985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 
00:24:28.698 [2024-07-12 11:28:54.736062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.736088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.736209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.698 [2024-07-12 11:28:54.736236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.698 qpair failed and we were unable to recover it. 00:24:28.698 [2024-07-12 11:28:54.736313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.736339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.736417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.736445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.736527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.736555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.736672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.736700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.736785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.736812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.736905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.736932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.737276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.737861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.737895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.737983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.738453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.738957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.738983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.739093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.739235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.739337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.739443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.739555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.739672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.739784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.739931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.739959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.740042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.740165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.740302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.740415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.740530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.740637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.740748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 
00:24:28.699 [2024-07-12 11:28:54.740894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.740922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.741010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.741036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.741125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.699 [2024-07-12 11:28:54.741152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.699 qpair failed and we were unable to recover it. 00:24:28.699 [2024-07-12 11:28:54.741233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.741352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.741467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.741578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.741722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.741829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.741946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.741973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.742084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.742253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.742365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.742472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.742610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.742729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.742851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.742886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.742974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.743089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.743202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.743304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.743447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.743554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.743669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.743831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.743951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.743977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.744514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.744938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.744967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.745052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.745079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 
00:24:28.700 [2024-07-12 11:28:54.745163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.745190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.745275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.745302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.745441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:28.700 [2024-07-12 11:28:54.745467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:28.700 qpair failed and we were unable to recover it. 00:24:28.700 [2024-07-12 11:28:54.745561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.004 [2024-07-12 11:28:54.745588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.004 qpair failed and we were unable to recover it. 00:24:29.004 [2024-07-12 11:28:54.745676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.004 [2024-07-12 11:28:54.745702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.004 qpair failed and we were unable to recover it. 
00:24:29.004 [2024-07-12 11:28:54.745812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.745840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.745941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.745972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.746099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.746208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.746317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.746424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.746526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.004 [2024-07-12 11:28:54.746642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.004 qpair failed and we were unable to recover it.
00:24:29.004 [2024-07-12 11:28:54.746752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.746778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.746897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.746925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.747934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.747962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.748907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.748934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.749901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.749931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.750953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.750980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.751076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.751102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.751186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.751214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.751298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.751324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.751465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.751498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.005 qpair failed and we were unable to recover it.
00:24:29.005 [2024-07-12 11:28:54.751578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.005 [2024-07-12 11:28:54.751605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.751692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.751719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.751803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.751831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.751923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.751950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.752891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.752977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.753921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.753948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.754914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.754953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.755896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.755925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.756007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.756033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.756117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.756145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.756256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.756287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.756371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.756397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.006 qpair failed and we were unable to recover it.
00:24:29.006 [2024-07-12 11:28:54.756501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.006 [2024-07-12 11:28:54.756527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.007 qpair failed and we were unable to recover it.
00:24:29.007 [2024-07-12 11:28:54.756669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.007 [2024-07-12 11:28:54.756697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.007 qpair failed and we were unable to recover it.
00:24:29.007 [2024-07-12 11:28:54.756777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.007 [2024-07-12 11:28:54.756803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.007 qpair failed and we were unable to recover it.
00:24:29.007 [2024-07-12 11:28:54.756895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.007 [2024-07-12 11:28:54.756922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.007 qpair failed and we were unable to recover it.
00:24:29.007 [2024-07-12 11:28:54.757031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.007 [2024-07-12 11:28:54.757057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.007 qpair failed and we were unable to recover it.
00:24:29.007 [2024-07-12 11:28:54.757149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.007 [2024-07-12 11:28:54.757181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.007 qpair failed and we were unable to recover it.
00:24:29.007 [2024-07-12 11:28:54.757306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.757347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.757438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.757465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.757545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.757572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.757652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.757679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.757783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.757809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.757926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.757954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.758520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.758906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.758933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.759015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.759126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.759240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.759351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.759490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.759608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.759755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.759876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.759905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.760017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.760159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.760269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.760377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.760523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.760681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.760805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.760832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.761018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.761047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.761135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.761164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.761250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.761276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.761388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.761415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.761506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.761532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 00:24:29.007 [2024-07-12 11:28:54.761634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.007 [2024-07-12 11:28:54.761674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.007 qpair failed and we were unable to recover it. 
00:24:29.007 [2024-07-12 11:28:54.761766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.761793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.761886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.761916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.761999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.762110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.762254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.762391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.762495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.762648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.762777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.762905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.762932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.763012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.763569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.763911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.763952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.764080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.764225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.764336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.764472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.764594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.764718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.764831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.764948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.764974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.765060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.765197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.765337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.765452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.765561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.765727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.765859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.765889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.766001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.766025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 
00:24:29.008 [2024-07-12 11:28:54.766108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.766133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.766221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.766246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.008 qpair failed and we were unable to recover it. 00:24:29.008 [2024-07-12 11:28:54.766321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.008 [2024-07-12 11:28:54.766346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.766441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.766467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.766563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.766600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 
00:24:29.009 [2024-07-12 11:28:54.766722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.766749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.766836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.766863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.766952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.766978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.767072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.767177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 
00:24:29.009 [2024-07-12 11:28:54.767312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.767425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.767530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.767666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.767788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 
00:24:29.009 [2024-07-12 11:28:54.767924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.767950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 
00:24:29.009 [2024-07-12 11:28:54.768552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.768898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.768925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 00:24:29.009 [2024-07-12 11:28:54.769010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.009 [2024-07-12 11:28:54.769036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.009 qpair failed and we were unable to recover it. 
00:24:29.009 [2024-07-12 11:28:54.769135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.769886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.769981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.009 [2024-07-12 11:28:54.770661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.009 qpair failed and we were unable to recover it.
00:24:29.009 [2024-07-12 11:28:54.770767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.770807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.770905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.770935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.771899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.771986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.772916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.772946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.773901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.773985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.774965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.774992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.775085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.775114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.775199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.775226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.775312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.775339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.775450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.775476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.775555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.010 [2024-07-12 11:28:54.775582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.010 qpair failed and we were unable to recover it.
00:24:29.010 [2024-07-12 11:28:54.775661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.775688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.775771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.775882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.775912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.776949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.776975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.777898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.777987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.778917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.778944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.779967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.779993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.780095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.780134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.780227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.011 [2024-07-12 11:28:54.780254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.011 qpair failed and we were unable to recover it.
00:24:29.011 [2024-07-12 11:28:54.780383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.011 [2024-07-12 11:28:54.780409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.011 qpair failed and we were unable to recover it. 00:24:29.011 [2024-07-12 11:28:54.780493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.780519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.780605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.780634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.780729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.780761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.780847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.780881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.780974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.781090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.781232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.781344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.781451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.781608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.781727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.781878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.781922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.782230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.782842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.782954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.782980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.783443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.783920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.783949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.784043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.784634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.784900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.784991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.785018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.785104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.785132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 
00:24:29.012 [2024-07-12 11:28:54.785216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.785244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.785339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.012 [2024-07-12 11:28:54.785379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.012 qpair failed and we were unable to recover it. 00:24:29.012 [2024-07-12 11:28:54.785469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.785497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.785607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.785640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.785728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.785754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.785833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.785859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.785950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.785976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.786397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.786873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.786901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.786986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.787544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.787905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.787932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.788123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.788712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.788942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.788968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.789058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.789084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.789166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.789194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 
00:24:29.013 [2024-07-12 11:28:54.789289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.789316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.789406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.789433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.013 qpair failed and we were unable to recover it. 00:24:29.013 [2024-07-12 11:28:54.789515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.013 [2024-07-12 11:28:54.789542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 00:24:29.014 [2024-07-12 11:28:54.789638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.789677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 00:24:29.014 [2024-07-12 11:28:54.789765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.789792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 
00:24:29.014 [2024-07-12 11:28:54.789874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.789901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 00:24:29.014 [2024-07-12 11:28:54.789995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.790022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 00:24:29.014 [2024-07-12 11:28:54.790098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.790125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 00:24:29.014 [2024-07-12 11:28:54.790243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.790270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 00:24:29.014 [2024-07-12 11:28:54.790358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.014 [2024-07-12 11:28:54.790386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.014 qpair failed and we were unable to recover it. 
00:24:29.014 [2024-07-12 11:28:54.790473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.014 [2024-07-12 11:28:54.790500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.014 qpair failed and we were unable to recover it.
[... the identical three-line sequence — posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error", and "qpair failed and we were unable to recover it." — repeats ~114 more times between 11:28:54.790 and 11:28:54.804, cycling through tqpair values 0x7fa0d8000b90, 0x7fa0e0000b90, 0x7fa0e8000b90, and 0xb4f200, all against addr=10.0.0.2, port=4420 ...]
00:24:29.016 [2024-07-12 11:28:54.804214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.804317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.804418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.804529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.804663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.804801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.804921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.804949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.805379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.805827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.805951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.805978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.806060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.806087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.806171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.806200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.806399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.806427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.806547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.806573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.806654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.806679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.806879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.806905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.806999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.807110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.807210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.807318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.807422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.807526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.807648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.807779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.807892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.807921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.808010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.808036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.808136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.808166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.808287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.808314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 00:24:29.017 [2024-07-12 11:28:54.808394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.808421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.017 qpair failed and we were unable to recover it. 
00:24:29.017 [2024-07-12 11:28:54.808501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.017 [2024-07-12 11:28:54.808530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.808621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.808650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.808762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.808790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.808883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.808913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.809122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.809748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.809903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.809985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.810100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.810223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.810342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.810484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.810591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.810696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.810811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.810930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.810959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.811539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.811901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.811928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.812128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.812663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.812907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.812934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.813017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.813043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.018 [2024-07-12 11:28:54.813129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.813155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 
00:24:29.018 [2024-07-12 11:28:54.813236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.018 [2024-07-12 11:28:54.813262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.018 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.813338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.813364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.813474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.813500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.813576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.813602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.813686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.813712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.813798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.813828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.813933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.813974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.814401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.814855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.814889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.814978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.815586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.815944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.815972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.816194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.816742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.816960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.816987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.817063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.817177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.817287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.817428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.817576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.817690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.817800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 
00:24:29.019 [2024-07-12 11:28:54.817915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.817942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.019 [2024-07-12 11:28:54.818035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.019 [2024-07-12 11:28:54.818062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.019 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.818470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.818932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.818960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.819044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.819636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.819958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.819984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.820171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.820738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.820965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.820992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.821082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.821220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.821324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.821432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.821549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.821653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.821758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.821901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.821929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.822008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.822126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.822232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.822347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 
00:24:29.020 [2024-07-12 11:28:54.822462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.822576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.020 qpair failed and we were unable to recover it. 00:24:29.020 [2024-07-12 11:28:54.822696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.020 [2024-07-12 11:28:54.822725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.822817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.822847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.822946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.822978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 
00:24:29.021 [2024-07-12 11:28:54.823075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.823185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.823318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.823458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.823599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 
00:24:29.021 [2024-07-12 11:28:54.823699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.823809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.823929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.823957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.824041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.824153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 
00:24:29.021 [2024-07-12 11:28:54.824288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.824401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.824515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.824639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.824775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 
00:24:29.021 [2024-07-12 11:28:54.824916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.824956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.825042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.825069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.825151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.825177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.825262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.825288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 00:24:29.021 [2024-07-12 11:28:54.825371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.021 [2024-07-12 11:28:54.825397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.021 qpair failed and we were unable to recover it. 
00:24:29.021 [2024-07-12 11:28:54.825481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.825507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.825620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.825647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.825741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.825781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.825878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.825909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.825996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.021 [2024-07-12 11:28:54.826883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.021 qpair failed and we were unable to recover it.
00:24:29.021 [2024-07-12 11:28:54.826980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.827890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.827930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.828954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.828981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.829912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.829996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.830911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.830940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.022 [2024-07-12 11:28:54.831718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.022 [2024-07-12 11:28:54.831750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.022 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.831840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.831874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.831957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.831984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.832943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.832969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.833912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.833990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.834925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.834955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.835941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.835969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.836058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.836084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.836171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.836197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.836309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.836336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.836417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.836444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.023 [2024-07-12 11:28:54.836532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.023 [2024-07-12 11:28:54.836559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.023 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.836638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.836673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.836769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.836809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.836943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.836973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.837962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.837988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.838911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.838994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.839926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.839954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.840970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.840998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.841082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.841108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.841190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.841216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.841329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.841355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.841443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.841469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.024 qpair failed and we were unable to recover it.
00:24:29.024 [2024-07-12 11:28:54.841555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.024 [2024-07-12 11:28:54.841584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.841669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.841698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.841785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.841812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.841898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.841926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.842881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.842909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.843942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.843970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.844919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.844947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.845910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.845939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.846018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.846045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.846139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.846165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.846266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.846293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.846373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.846401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.025 qpair failed and we were unable to recover it.
00:24:29.025 [2024-07-12 11:28:54.846518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.025 [2024-07-12 11:28:54.846545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.846625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.846652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.846726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.846752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.846837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.846863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.846957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.846984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.847855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.847898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.848886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.848915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.849906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.849989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.026 [2024-07-12 11:28:54.850764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.026 [2024-07-12 11:28:54.850790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.026 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.850871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.850898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.850980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.851912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.851939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.852896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.852982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.853852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.853976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.854943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.854971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.027 qpair failed and we were unable to recover it.
00:24:29.027 [2024-07-12 11:28:54.855791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.027 [2024-07-12 11:28:54.855817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.855928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.855955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.856816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.856845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.857863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.857897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.858877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.858987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.859939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.859967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.860073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.860192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.860295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.860431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.860569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.028 [2024-07-12 11:28:54.860736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.028 qpair failed and we were unable to recover it.
00:24:29.028 [2024-07-12 11:28:54.860820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.860846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.860948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.860975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.861909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.861937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.862968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.862996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.863926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.863956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.864894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.864922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.029 [2024-07-12 11:28:54.865801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.029 qpair failed and we were unable to recover it.
00:24:29.029 [2024-07-12 11:28:54.865924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.865953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.866944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.866970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.867849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.867981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.868914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.868997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.869878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.869905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.870000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.870026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.870144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.030 [2024-07-12 11:28:54.870171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.030 qpair failed and we were unable to recover it.
00:24:29.030 [2024-07-12 11:28:54.870259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.870285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.870368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.870395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.870482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.870510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.870601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.870630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.870724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.870765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.870881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.870909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.871055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.871171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.871276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.871383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.871491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.031 [2024-07-12 11:28:54.871594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.031 qpair failed and we were unable to recover it.
00:24:29.031 [2024-07-12 11:28:54.871708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.871736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.871853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.871890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.871996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.872127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.872235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-12 11:28:54.872339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.872472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.872583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.872711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.872884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.872913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-12 11:28:54.873010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.873116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.873251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.873362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.873547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-12 11:28:54.873671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.873811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.873933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.873958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-12 11:28:54.874274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 
00:24:29.031 [2024-07-12 11:28:54.874833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.874957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.874984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.875091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.875116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.875202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.031 [2024-07-12 11:28:54.875226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.031 qpair failed and we were unable to recover it. 00:24:29.031 [2024-07-12 11:28:54.875300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.875324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.875408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.875436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.875535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.875561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.875651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.875676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.875757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.875781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.875869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.875907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.875992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.876576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.876890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.876977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.877100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.877237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.877348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.877486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.877598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.877740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.877855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.877891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.877973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.878434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.878900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.878929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.879013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 
00:24:29.032 [2024-07-12 11:28:54.879609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.879951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.879978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.032 qpair failed and we were unable to recover it. 00:24:29.032 [2024-07-12 11:28:54.880060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.032 [2024-07-12 11:28:54.880086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-12 11:28:54.880168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.880272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.880385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.880505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.880612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-12 11:28:54.880736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.880892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.880922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-12 11:28:54.881345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.881828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.033 [2024-07-12 11:28:54.881965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.881993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.882082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.882109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.882194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.882220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.882308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.882335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 00:24:29.033 [2024-07-12 11:28:54.882425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.033 [2024-07-12 11:28:54.882455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.033 qpair failed and we were unable to recover it. 
00:24:29.036 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it) repeats continuously for tqpair=0xb4f200, 0x7fa0d8000b90, 0x7fa0e0000b90, and 0x7fa0e8000b90, all with addr=10.0.0.2, port=4420, through 2024-07-12 11:28:54.895017 ...]
00:24:29.036 [2024-07-12 11:28:54.895098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.895205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.895342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.895472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.895598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 
00:24:29.036 [2024-07-12 11:28:54.895731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.895855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.895891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.895975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.896091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.896202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 
00:24:29.036 [2024-07-12 11:28:54.896320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.896434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.896546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.896657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.896765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.896792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 
00:24:29.036 [2024-07-12 11:28:54.896993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 
00:24:29.036 [2024-07-12 11:28:54.897619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.897959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.897986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.898067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.898094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 
00:24:29.036 [2024-07-12 11:28:54.898181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.898209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.898290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.898318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.898402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.036 [2024-07-12 11:28:54.898429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.036 qpair failed and we were unable to recover it. 00:24:29.036 [2024-07-12 11:28:54.898508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.898535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.898611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.898637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.898716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.898742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.898826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.898852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.898944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.898970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.899069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.899178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.899316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.899427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.899527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.899762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.899895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.899923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.900017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.900584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.900919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.900949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.901193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.901739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.901966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.901992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.902295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 
00:24:29.037 [2024-07-12 11:28:54.902860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.902900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.902982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.903009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.903084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.903109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.903192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.037 [2024-07-12 11:28:54.903219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.037 qpair failed and we were unable to recover it. 00:24:29.037 [2024-07-12 11:28:54.903302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.903331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 
00:24:29.038 [2024-07-12 11:28:54.903411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.903439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.903527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.903556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.903639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.903668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.903750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.903777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.903862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.903898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 
00:24:29.038 [2024-07-12 11:28:54.903989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 
00:24:29.038 [2024-07-12 11:28:54.904574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.904931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.904958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 00:24:29.038 [2024-07-12 11:28:54.905043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.038 [2024-07-12 11:28:54.905072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.038 qpair failed and we were unable to recover it. 
00:24:29.038 [2024-07-12 11:28:54.905168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.905907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.905987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.906902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.906930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.038 [2024-07-12 11:28:54.907680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.038 qpair failed and we were unable to recover it.
00:24:29.038 [2024-07-12 11:28:54.907765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.907792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.907916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.907945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.908943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.908970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.909904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.909931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.910963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.910990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.911931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.911971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.912071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.912100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.039 [2024-07-12 11:28:54.912189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.039 [2024-07-12 11:28:54.912216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.039 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.912293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.912319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.912440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.912481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.912568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.912596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.912676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.912703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.912783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.912809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.912903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.912930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.913932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.913959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.914897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.914925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.915926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.915953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.916030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.916056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.916132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.916158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.916268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.040 [2024-07-12 11:28:54.916294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.040 qpair failed and we were unable to recover it.
00:24:29.040 [2024-07-12 11:28:54.916376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.916403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 00:24:29.040 [2024-07-12 11:28:54.916513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.916540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 00:24:29.040 [2024-07-12 11:28:54.916626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.916652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 00:24:29.040 [2024-07-12 11:28:54.916743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.916788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 00:24:29.040 [2024-07-12 11:28:54.916879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.916908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 
00:24:29.040 [2024-07-12 11:28:54.917027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.917056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 00:24:29.040 [2024-07-12 11:28:54.917150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.040 [2024-07-12 11:28:54.917177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.040 qpair failed and we were unable to recover it. 00:24:29.040 [2024-07-12 11:28:54.917259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.917286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.917402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.917430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.917514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.917541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.917674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.917700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.917809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.917835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.917922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.917950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.918273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.918845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.918967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.918993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.919437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.919936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.919963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.920051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.920170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.920304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.920414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.920521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.920671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.920826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.920948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.920975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.921087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.921195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.921303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.921439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.921551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.921661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.921797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 
00:24:29.041 [2024-07-12 11:28:54.921929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.921957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.922045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.922071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.041 [2024-07-12 11:28:54.922158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.041 [2024-07-12 11:28:54.922186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.041 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.922300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.922326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.922410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.922439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.922519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.922548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.922632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.922661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.922775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.922802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.922917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.922944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.923030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.923143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.923247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.923356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.923498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.923645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.923748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.923860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.923891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.924325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.924814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.924930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.924963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.925524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.925947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.925975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.926055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.926154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.926268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.926381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.926527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.926657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 
00:24:29.042 [2024-07-12 11:28:54.926785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.926917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.926945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.927028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.927054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.042 [2024-07-12 11:28:54.927145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.042 [2024-07-12 11:28:54.927171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.042 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.927283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.927392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.927504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.927611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.927713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.927825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.927943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.927972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.928564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.928909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.928950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.929162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.929758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.929901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.929977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.930093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.930225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.930336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.930453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.930571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.930712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.930819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 
00:24:29.043 [2024-07-12 11:28:54.930936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.930963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.931098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.931125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.931217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.931243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.043 [2024-07-12 11:28:54.931349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.043 [2024-07-12 11:28:54.931375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.043 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.931461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.931489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.931623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.931649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.931734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.931761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.931903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.931930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.932026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.932199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.932362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.932468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.932608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.932722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.932830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.932948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.932974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.933531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.933925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.933952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.934047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.934212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.934350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.934463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.934568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.934706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.934812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.934937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.934966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.935380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.935887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.935915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.044 [2024-07-12 11:28:54.936031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.936059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.936144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.936170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.936279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.936305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.936388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.936417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 00:24:29.044 [2024-07-12 11:28:54.936564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.044 [2024-07-12 11:28:54.936598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.044 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-12 11:28:54.936710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.936736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.936816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.936843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.936993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.937105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.937253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-12 11:28:54.937386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.937492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.937617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.937742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.937899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.937927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-12 11:28:54.938014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-12 11:28:54.938633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.938971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.938997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.939073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.939099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-12 11:28:54.939214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.939240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.939324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.939353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.939470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.939498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.939617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.939645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 00:24:29.045 [2024-07-12 11:28:54.939734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.045 [2024-07-12 11:28:54.939762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.045 qpair failed and we were unable to recover it. 
00:24:29.045 [2024-07-12 11:28:54.939839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.939873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.939974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.940906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.940935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.045 qpair failed and we were unable to recover it.
00:24:29.045 [2024-07-12 11:28:54.941805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.045 [2024-07-12 11:28:54.941834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.941935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.941964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.942885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.942912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.943892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.943923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.944948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.944975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.945914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.945945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.046 qpair failed and we were unable to recover it.
00:24:29.046 [2024-07-12 11:28:54.946958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.046 [2024-07-12 11:28:54.946984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.947892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.947922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.948900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.948928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.949894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.949974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.950888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.950986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.951012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.951120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.047 [2024-07-12 11:28:54.951146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.047 qpair failed and we were unable to recover it.
00:24:29.047 [2024-07-12 11:28:54.951228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.951255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.951373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.951400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.951515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.951544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.951635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.951668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.951751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.951778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.951857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.951897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.951985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.952912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.952987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.953875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.953990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.954140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.954253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.954362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.954499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.954617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.048 qpair failed and we were unable to recover it.
00:24:29.048 [2024-07-12 11:28:54.954739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.048 [2024-07-12 11:28:54.954767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.954877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.954909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.954992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.955926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.955953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.956923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.956954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.957949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.957976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.958973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.958998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.049 qpair failed and we were unable to recover it.
00:24:29.049 [2024-07-12 11:28:54.959929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.049 [2024-07-12 11:28:54.959957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.960844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.960882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.961935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.961975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.962900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.962928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.963036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.963063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.963154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.963181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.963297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.963323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.963402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.050 [2024-07-12 11:28:54.963436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.050 qpair failed and we were unable to recover it.
00:24:29.050 [2024-07-12 11:28:54.963522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.963550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.963669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.963697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.963819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.963847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.963946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.963975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.964059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 
00:24:29.050 [2024-07-12 11:28:54.964203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.964313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.964414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.964524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.964659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 
00:24:29.050 [2024-07-12 11:28:54.964802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.964919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.964948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.965034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.050 [2024-07-12 11:28:54.965063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.050 qpair failed and we were unable to recover it. 00:24:29.050 [2024-07-12 11:28:54.965150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.965260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.965366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.965505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.965611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.965722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.965825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.965942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.965970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.966546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.966928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.966955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.967038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.967172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.967280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.967384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.967485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.967599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.967743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.967881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.967921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.968351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.968904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.968931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.969038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.969145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.969248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.969361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.969511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 
00:24:29.051 [2024-07-12 11:28:54.969637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.969750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.969858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.969890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.051 [2024-07-12 11:28:54.970008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.051 [2024-07-12 11:28:54.970034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.051 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.970114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.970248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.970388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.970495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.970603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.970709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.970854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.970885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.971524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.971939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.971966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.972078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.972224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.972358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.972499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.972608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.972747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.972853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.972892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.972976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.973074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.973212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.973322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.973483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.973653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.973786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.973903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.973930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 00:24:29.052 [2024-07-12 11:28:54.974007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.052 [2024-07-12 11:28:54.974034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.052 qpair failed and we were unable to recover it. 
00:24:29.052 [2024-07-12 11:28:54.974115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.052 [2024-07-12 11:28:54.974142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.052 qpair failed and we were unable to recover it.
00:24:29.052 [2024-07-12 11:28:54.974229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.052 [2024-07-12 11:28:54.974264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.052 qpair failed and we were unable to recover it.
00:24:29.052 [2024-07-12 11:28:54.974374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.052 [2024-07-12 11:28:54.974400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.974480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.974506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.974588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.974614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.974730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.974760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.974839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.974874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.974965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.974994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.975943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.975970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.976919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.976947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.977885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.977913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.978967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.978994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.979142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.979168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.979247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.053 [2024-07-12 11:28:54.979274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.053 qpair failed and we were unable to recover it.
00:24:29.053 [2024-07-12 11:28:54.979386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.979414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.979496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.979526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.979624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.979665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.979786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.979814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.979922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.979949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.980879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.980910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.981916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.981943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.982917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.982944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.983968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.983995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.984084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.984110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.984189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.054 [2024-07-12 11:28:54.984218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.054 qpair failed and we were unable to recover it.
00:24:29.054 [2024-07-12 11:28:54.984302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.984329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.984417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.984444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.984525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.984552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.984658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.984699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.984791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.984819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.984944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.984972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.985875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.985903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.986964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.986990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.987870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.987982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.055 [2024-07-12 11:28:54.988009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.055 qpair failed and we were unable to recover it.
00:24:29.055 [2024-07-12 11:28:54.988099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.988245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.988382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.988499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.988608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 
00:24:29.055 [2024-07-12 11:28:54.988715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.988879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.988906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.988994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.989021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.989130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.989157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 00:24:29.055 [2024-07-12 11:28:54.989233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.055 [2024-07-12 11:28:54.989259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.055 qpair failed and we were unable to recover it. 
00:24:29.055 [2024-07-12 11:28:54.989338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.989364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.989445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.989471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.989561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.989601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.989715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.989742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.989837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.989888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.989985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.990618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.990855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.990973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.991085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.991195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.991302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.991410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.991556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.991664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.991804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.991922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.991951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.992440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.992942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.992968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.993047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.993074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.993225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.993251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.993335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.993361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.993450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.993477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 00:24:29.056 [2024-07-12 11:28:54.993595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.993623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.056 qpair failed and we were unable to recover it. 
00:24:29.056 [2024-07-12 11:28:54.993700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.056 [2024-07-12 11:28:54.993727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.993834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.993860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.993945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.993971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.994062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.994212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 
00:24:29.057 [2024-07-12 11:28:54.994347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.994479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.994586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.994698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.994816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 
00:24:29.057 [2024-07-12 11:28:54.994971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.994999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 
00:24:29.057 [2024-07-12 11:28:54.995560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.995952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.995979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.996067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 
00:24:29.057 [2024-07-12 11:28:54.996189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.996297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.996435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.996546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.996674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 
00:24:29.057 [2024-07-12 11:28:54.996793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.996923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.996952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.997047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.997074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.997157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.997185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 00:24:29.057 [2024-07-12 11:28:54.997273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.057 [2024-07-12 11:28:54.997300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.057 qpair failed and we were unable to recover it. 
00:24:29.057 [2024-07-12 11:28:54.997381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.997408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.997519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.997546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.997653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.997680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.997764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.997791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.997874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.997902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.998045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.998165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.998281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.998392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.998501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.057 [2024-07-12 11:28:54.998615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.057 qpair failed and we were unable to recover it.
00:24:29.057 [2024-07-12 11:28:54.998706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.998746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.998863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.998898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.998981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:54.999841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:54.999972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.000844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.000891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.001904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.001931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.002854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.002982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.058 [2024-07-12 11:28:55.003784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.058 qpair failed and we were unable to recover it.
00:24:29.058 [2024-07-12 11:28:55.003880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.003909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.003994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.004952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.004982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.005922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.005949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.006911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.006938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.007941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.007969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.008901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.008928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.009008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.059 [2024-07-12 11:28:55.009035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.059 qpair failed and we were unable to recover it.
00:24:29.059 [2024-07-12 11:28:55.009122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.009871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.009961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.010919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.010948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.060 [2024-07-12 11:28:55.011756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.060 qpair failed and we were unable to recover it.
00:24:29.060 [2024-07-12 11:28:55.011899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.011928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 
00:24:29.060 [2024-07-12 11:28:55.012517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.012904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.012932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.013013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.013040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 
00:24:29.060 [2024-07-12 11:28:55.013161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.013187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.013273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.013299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.013388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.013416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.060 [2024-07-12 11:28:55.013514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.060 [2024-07-12 11:28:55.013541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.060 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.013628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.013656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.013743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.013784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.013907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.013935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.014360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.014842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.014876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.014996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.015609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.015913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.015995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.016101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.016205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.016307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.016436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.016558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.016701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.016901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.016941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.017560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.017954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.017981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.018076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.018102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.061 [2024-07-12 11:28:55.018197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.018225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.018303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.018329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.018441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.018472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.018550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.018577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 00:24:29.061 [2024-07-12 11:28:55.018662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.061 [2024-07-12 11:28:55.018689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.061 qpair failed and we were unable to recover it. 
00:24:29.062 [2024-07-12 11:28:55.018775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.018801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.018891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.018920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 
00:24:29.062 [2024-07-12 11:28:55.019350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.019775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 
00:24:29.062 [2024-07-12 11:28:55.019915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.019942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 
00:24:29.062 [2024-07-12 11:28:55.020475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.020860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.020892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 00:24:29.062 [2024-07-12 11:28:55.021000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.062 [2024-07-12 11:28:55.021027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.062 qpair failed and we were unable to recover it. 
00:24:29.062 [2024-07-12 11:28:55.021116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.021222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.021333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.021442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.021567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.021725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.021858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.021890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.022921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.022949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.023091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.023118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.023235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.023266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.023345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.023371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.023453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.023479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.062 [2024-07-12 11:28:55.023615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.062 [2024-07-12 11:28:55.023641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.062 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.023725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.023753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.023834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.023861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.023947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.023974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.024907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.024992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.025929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.025955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.026934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.026962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.027880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.027906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.028016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.028042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.028123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.028148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.028231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.028262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.028385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.028412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.063 qpair failed and we were unable to recover it.
00:24:29.063 [2024-07-12 11:28:55.028518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.063 [2024-07-12 11:28:55.028545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.028625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.028651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.028732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.028758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.028895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.028935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.029968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.029994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.030940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.030981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.031959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.031985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.032973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.032999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.033095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.033122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.033207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.033239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.033351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.033378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.064 [2024-07-12 11:28:55.033462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.064 [2024-07-12 11:28:55.033489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.064 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.033613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.033654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.033751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.033793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.033916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.033945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.034926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.034954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.035047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.035073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.035154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.035181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.035258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.065 [2024-07-12 11:28:55.035286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.065 qpair failed and we were unable to recover it.
00:24:29.065 [2024-07-12 11:28:55.035368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.035393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.035472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.035498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.035572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.035597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.035682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.035708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.035824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.035850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 
00:24:29.065 [2024-07-12 11:28:55.035944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.035970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 
00:24:29.065 [2024-07-12 11:28:55.036514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.036902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.036983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 
00:24:29.065 [2024-07-12 11:28:55.037088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.037200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.037302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.037448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.037561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 
00:24:29.065 [2024-07-12 11:28:55.037690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.037823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.037938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.037964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.065 [2024-07-12 11:28:55.038050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.065 [2024-07-12 11:28:55.038076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.065 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.038168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.038279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.038385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.038494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.038636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.038777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.038886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.038913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.039489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.039971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.039997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.040083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.040215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.040317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.040425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.040564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.040703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.040823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.040969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.040997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.041075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.041210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.041313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.041431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.041602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.041737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.041848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.041879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.041993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.042104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.042241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.042389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.042497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 
00:24:29.066 [2024-07-12 11:28:55.042616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.042754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.042891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.042919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.043020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.043047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.066 qpair failed and we were unable to recover it. 00:24:29.066 [2024-07-12 11:28:55.043131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.066 [2024-07-12 11:28:55.043163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.043248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.043274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.043383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.043411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.043500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.043529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.043637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.043664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.043743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.043769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.043861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.043898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.043992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.044450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.044964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.044993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.045111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.045261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.045400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.045514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.045627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.045737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.045852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.045884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.046347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.046858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.046890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.046976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.047529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.047926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.047954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 00:24:29.067 [2024-07-12 11:28:55.048047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.048072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.067 qpair failed and we were unable to recover it. 
00:24:29.067 [2024-07-12 11:28:55.048188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.067 [2024-07-12 11:28:55.048214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.048287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.048312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.048387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.048412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.048528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.048556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.048639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.048665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.048774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.048801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.048886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.048913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.049382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.049964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.049992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.050072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.050181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.050323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.050466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.050577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.050686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.050824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.050957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.050986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.051066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.051205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.051344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.051470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.051599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.051740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.051852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.051891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.051973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.052615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.052962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.052990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.053103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.053130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 
00:24:29.068 [2024-07-12 11:28:55.053221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.068 [2024-07-12 11:28:55.053249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.068 qpair failed and we were unable to recover it. 00:24:29.068 [2024-07-12 11:28:55.053333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.053360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.053496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.053523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.053599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.053626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.053708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.053735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-12 11:28:55.053811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.053838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.053925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.053952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-12 11:28:55.054406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.054920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.054949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-12 11:28:55.055032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-12 11:28:55.055646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.055912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.055998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.056026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 00:24:29.069 [2024-07-12 11:28:55.056146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.069 [2024-07-12 11:28:55.056176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.069 qpair failed and we were unable to recover it. 
00:24:29.069 [2024-07-12 11:28:55.056266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.056293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.056421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.056468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.056556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.056583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.056666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.056692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.056801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.056827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.056911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.056938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.057026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.057052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.057130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.057157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.057236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.057262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.057347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.057376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.057492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.057520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.069 [2024-07-12 11:28:55.057596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.069 [2024-07-12 11:28:55.057623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.069 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.057735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.057762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.057841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.057873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.057955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.057981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.058971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.058998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.059956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.059983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.060889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.060917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.061919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.061949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.062031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.062057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.062140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.062166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.062249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.062275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.062352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.070 [2024-07-12 11:28:55.062379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.070 qpair failed and we were unable to recover it.
00:24:29.070 [2024-07-12 11:28:55.062459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.062486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.062568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.062595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.062680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.062707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.062801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.062827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.062926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.062954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.063969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.063995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.064901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.064932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.065901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.065991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.066920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.066999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.067026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.071 [2024-07-12 11:28:55.067109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.071 [2024-07-12 11:28:55.067135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.071 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.067918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.067960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.068906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.068991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.072 [2024-07-12 11:28:55.069744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.072 qpair failed and we were unable to recover it.
00:24:29.072 [2024-07-12 11:28:55.069827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.069859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.069964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.069991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 
00:24:29.072 [2024-07-12 11:28:55.070423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.070940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.070968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 
00:24:29.072 [2024-07-12 11:28:55.071047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 
00:24:29.072 [2024-07-12 11:28:55.071600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.072 [2024-07-12 11:28:55.071830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.072 qpair failed and we were unable to recover it. 00:24:29.072 [2024-07-12 11:28:55.071917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.071945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.072025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.072141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.072262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.072375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.072517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.072622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.072761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.072890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.072917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.073357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.073838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.073959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.073985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.074496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5d0e0 (9): Bad file descriptor 00:24:29.073 [2024-07-12 11:28:55.074591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.074933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.074961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.075146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 
00:24:29.073 [2024-07-12 11:28:55.075705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.075929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.075955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.073 qpair failed and we were unable to recover it. 00:24:29.073 [2024-07-12 11:28:55.076040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.073 [2024-07-12 11:28:55.076067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.076162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.076270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.076372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.076506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.076612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.076715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.076827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.076963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.076989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.077501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.077954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.077979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.078071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.078649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.078904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.078987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.079096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.079209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.079353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.079456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.079569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 00:24:29.074 [2024-07-12 11:28:55.079683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.074 [2024-07-12 11:28:55.079710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.074 qpair failed and we were unable to recover it. 
00:24:29.074 [2024-07-12 11:28:55.079793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.079818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.079913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.079940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.074 [2024-07-12 11:28:55.080718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.074 qpair failed and we were unable to recover it.
00:24:29.074 [2024-07-12 11:28:55.080791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.080816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.080901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.080927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.081891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.081917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.082862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.082991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.083922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.083951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.084946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.084977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.085061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.085089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.085173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.085200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.085286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.085311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.075 qpair failed and we were unable to recover it.
00:24:29.075 [2024-07-12 11:28:55.085392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.075 [2024-07-12 11:28:55.085418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.085491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.085517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.085627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.085652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.085736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.085762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.085839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.085875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.085968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.085994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.086935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.086965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.087911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.087996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.088917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.088945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.089971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.089998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.090087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.076 [2024-07-12 11:28:55.090118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.076 qpair failed and we were unable to recover it.
00:24:29.076 [2024-07-12 11:28:55.090198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.077 [2024-07-12 11:28:55.090225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.077 qpair failed and we were unable to recover it.
00:24:29.077 [2024-07-12 11:28:55.090345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.077 [2024-07-12 11:28:55.090372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.077 qpair failed and we were unable to recover it.
00:24:29.077 [2024-07-12 11:28:55.090453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.077 [2024-07-12 11:28:55.090482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.077 qpair failed and we were unable to recover it.
00:24:29.077 [2024-07-12 11:28:55.090562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.077 [2024-07-12 11:28:55.090588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.077 qpair failed and we were unable to recover it.
00:24:29.077 [2024-07-12 11:28:55.090666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.077 [2024-07-12 11:28:55.090695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.077 qpair failed and we were unable to recover it.
00:24:29.077 [2024-07-12 11:28:55.090781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-12 11:28:55.090808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.077 [2024-07-12 11:28:55.090919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.077 [2024-07-12 11:28:55.090952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.077 qpair failed and we were unable to recover it. 00:24:29.378 [2024-07-12 11:28:55.091045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.378 [2024-07-12 11:28:55.091073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.378 qpair failed and we were unable to recover it. 00:24:29.378 [2024-07-12 11:28:55.091160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.378 [2024-07-12 11:28:55.091189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.378 qpair failed and we were unable to recover it. 00:24:29.378 [2024-07-12 11:28:55.091299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.378 [2024-07-12 11:28:55.091326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.378 qpair failed and we were unable to recover it. 
00:24:29.378 [2024-07-12 11:28:55.091411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.378 [2024-07-12 11:28:55.091437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.378 qpair failed and we were unable to recover it. 00:24:29.378 [2024-07-12 11:28:55.091527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.091555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.091646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.091675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.091772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.091812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.091912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.091942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.092039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.092601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.092935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.092963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.093152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.093714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.093885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.093976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.094093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.094207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.094317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.094425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.094526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.094645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.094762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.094876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.094906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.095445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.095912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.095940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.096027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.096602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.096970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.096996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 00:24:29.379 [2024-07-12 11:28:55.097084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.097111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.379 qpair failed and we were unable to recover it. 
00:24:29.379 [2024-07-12 11:28:55.097192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.379 [2024-07-12 11:28:55.097219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.097303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.097330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.097409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.097435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.097522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.097548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.097629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.097655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 
00:24:29.380 [2024-07-12 11:28:55.097755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.097781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.097863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.097896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.097976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.098124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.098258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 
00:24:29.380 [2024-07-12 11:28:55.098362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.098481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.098595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.098702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.098817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 
00:24:29.380 [2024-07-12 11:28:55.098952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.098982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 
00:24:29.380 [2024-07-12 11:28:55.099564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.099934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.099961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.100035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.100061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 
00:24:29.380 [2024-07-12 11:28:55.100145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.100171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.100273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.100301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.100388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.100417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.100504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.100532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 00:24:29.380 [2024-07-12 11:28:55.100614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.380 [2024-07-12 11:28:55.100640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.380 qpair failed and we were unable to recover it. 
00:24:29.380 [2024-07-12 11:28:55.100726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.100753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.100834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.100863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.100960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.100986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.101913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.101996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.102924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.102950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.103036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.103064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.103160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.103189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.103271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.380 [2024-07-12 11:28:55.103299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.380 qpair failed and we were unable to recover it.
00:24:29.380 [2024-07-12 11:28:55.103387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.103414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.103531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.103557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.103639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.103665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.103747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.103773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.103863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.103899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.103989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.104888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.104915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.105887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.105927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.106926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.106953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.107908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.107937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.108947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.108974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.109064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.109090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.109168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.109194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.109273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.381 [2024-07-12 11:28:55.109299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.381 qpair failed and we were unable to recover it.
00:24:29.381 [2024-07-12 11:28:55.109393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.109420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.109513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.109542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.109629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.109656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.109736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.109763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.109846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.109879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.109968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.109994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.110920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.110947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.111925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.111952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.112953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.112979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.382 [2024-07-12 11:28:55.113775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.382 qpair failed and we were unable to recover it.
00:24:29.382 [2024-07-12 11:28:55.113856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.382 [2024-07-12 11:28:55.113890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.382 qpair failed and we were unable to recover it. 00:24:29.382 [2024-07-12 11:28:55.113977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.382 [2024-07-12 11:28:55.114006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.382 qpair failed and we were unable to recover it. 00:24:29.382 [2024-07-12 11:28:55.114092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.382 [2024-07-12 11:28:55.114123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.382 qpair failed and we were unable to recover it. 00:24:29.382 [2024-07-12 11:28:55.114206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.382 [2024-07-12 11:28:55.114233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.382 qpair failed and we were unable to recover it. 00:24:29.382 [2024-07-12 11:28:55.114317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.382 [2024-07-12 11:28:55.114344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.382 qpair failed and we were unable to recover it. 
00:24:29.382 [2024-07-12 11:28:55.114429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.382 [2024-07-12 11:28:55.114456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.382 qpair failed and we were unable to recover it. 00:24:29.382 [2024-07-12 11:28:55.114563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.114589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.114672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.114699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.114822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.114862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.114964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.114992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.115075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.115214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.115319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.115429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.115566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.115688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.115825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.115955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.115981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.116069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.116197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.116314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.116426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.116540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.116651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.116766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.116887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.116915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.117481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.117928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.117955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.118048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.118606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.118932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.118961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.119157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.119752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.119891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.119976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.120082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.120190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 
00:24:29.383 [2024-07-12 11:28:55.120299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.120426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.120556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.120714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.383 [2024-07-12 11:28:55.120742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.383 qpair failed and we were unable to recover it. 00:24:29.383 [2024-07-12 11:28:55.120835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.120863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 
00:24:29.384 [2024-07-12 11:28:55.120957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.120984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 
00:24:29.384 [2024-07-12 11:28:55.121537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.121907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.121935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.122010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.122041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 
00:24:29.384 [2024-07-12 11:28:55.122127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.122153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.122259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.122285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.122369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.122397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.122472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.122500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 00:24:29.384 [2024-07-12 11:28:55.122594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.384 [2024-07-12 11:28:55.122635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.384 qpair failed and we were unable to recover it. 
00:24:29.384 [2024-07-12 11:28:55.122728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.122755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.122835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.122861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.122965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.122992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.123953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.123980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.124881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.124983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.125970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.125997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.126883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.126975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.127007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.127091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.127119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.127213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.384 [2024-07-12 11:28:55.127241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.384 qpair failed and we were unable to recover it.
00:24:29.384 [2024-07-12 11:28:55.127354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.127380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.127473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.127500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.127598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.127624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.127708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.127734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.127857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.127909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.127997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.128913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.128999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.129907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.129998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.130909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.130991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.131850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.131961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.132845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.132891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.133019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.133058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.133148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.133188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.385 qpair failed and we were unable to recover it.
00:24:29.385 [2024-07-12 11:28:55.133321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.385 [2024-07-12 11:28:55.133358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.133475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.133501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.133589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.133616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.133703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.133729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.133836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.133862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.133987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.134951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.134978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.135910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.135938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.136893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.136975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.386 [2024-07-12 11:28:55.137001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.386 qpair failed and we were unable to recover it.
00:24:29.386 [2024-07-12 11:28:55.137092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.137209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.137326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.137451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.137586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 
00:24:29.386 [2024-07-12 11:28:55.137731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.137845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.137891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.137979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.138095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.138211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 
00:24:29.386 [2024-07-12 11:28:55.138330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.138441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.138546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.138683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.138794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 
00:24:29.386 [2024-07-12 11:28:55.138915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.138942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 
00:24:29.386 [2024-07-12 11:28:55.139506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.386 [2024-07-12 11:28:55.139766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.386 qpair failed and we were unable to recover it. 00:24:29.386 [2024-07-12 11:28:55.139848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.139885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.139969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.139996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.140083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.140207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.140358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.140498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.140625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.140745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.140849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.140965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.140991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.141073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.141183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.141293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.141409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.141515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.141628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.141787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.141897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.141929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.142497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.142898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.142989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.143093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.143210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.143327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.143455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.143574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.143705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.143811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.143923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.143952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.144041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.144146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.144283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.144410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.144527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.144661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.144790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.144912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.144940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.145513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.145969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.145999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 
00:24:29.387 [2024-07-12 11:28:55.146106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.387 [2024-07-12 11:28:55.146145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.387 qpair failed and we were unable to recover it. 00:24:29.387 [2024-07-12 11:28:55.146248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.146358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.146470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.146573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.146679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.146791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.146954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.146981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.147285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.147877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.147904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.147980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.148432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.148897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.148926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.149014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.149122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.149260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.149366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.149487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.149592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.149710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.149855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.149901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.150250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.150878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.150907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.150988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.151096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.151212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.151319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 
00:24:29.388 [2024-07-12 11:28:55.151423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.151538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.388 [2024-07-12 11:28:55.151653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.388 [2024-07-12 11:28:55.151683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.388 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.151773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.151810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.151909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.151936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.152027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.152628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.152890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.152972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.153086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.153229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.153335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.153445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.153553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.153727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.153859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.153897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.153986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.154436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.154963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.154989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.155079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.155220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.155331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.155445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.155573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.155679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.155805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.155958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.155985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.156308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.156885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.156913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.156991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.157453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.157924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.157952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.158050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.158076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.158167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.158193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.158319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.158344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.158426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.158457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 00:24:29.389 [2024-07-12 11:28:55.158541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.389 [2024-07-12 11:28:55.158567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.389 qpair failed and we were unable to recover it. 
00:24:29.389 [2024-07-12 11:28:55.158663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.158703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.158797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.158824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.158919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.158947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.159923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.159963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.389 [2024-07-12 11:28:55.160902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.389 [2024-07-12 11:28:55.160932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.389 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.161914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.161989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.162859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.162897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.163925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.163952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.164917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.164943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.165886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.165976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.166933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.166960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.167908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.167937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.168903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.168929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.169944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.169971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.170078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.170104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.170191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.390 [2024-07-12 11:28:55.170226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.390 qpair failed and we were unable to recover it.
00:24:29.390 [2024-07-12 11:28:55.170303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.170329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.170431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.170459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.170573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.170613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.170701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.170729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.170825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.170852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.170946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.170972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.171914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.171996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.172899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.172927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.173886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.173913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.174918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.174946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.175920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.175997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.176955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.176995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.177960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.177986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.178064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.178090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.178168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.178194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.178278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.178304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.178391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.178423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.178515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.391 [2024-07-12 11:28:55.178542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.391 qpair failed and we were unable to recover it.
00:24:29.391 [2024-07-12 11:28:55.178633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.178659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.178740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.178766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.178843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.178876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.178961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.178986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.179971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.179999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.392 [2024-07-12 11:28:55.180824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.392 qpair failed and we were unable to recover it.
00:24:29.392 [2024-07-12 11:28:55.180927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.180954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.181486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.181941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.181968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.182043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.182177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.182287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.182409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.182539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.182664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.182797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.182906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.182933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.183255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.183827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.183948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.183975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.184403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.184862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.184894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.185005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.185576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.185919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.185946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.186033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.186158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.186289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.186409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.186524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.186629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.186746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.186886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.186916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.187019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.187048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.187134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.187167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.187253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.187279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 
00:24:29.392 [2024-07-12 11:28:55.187367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.187395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.187475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.187504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.187599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.392 [2024-07-12 11:28:55.187626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.392 qpair failed and we were unable to recover it. 00:24:29.392 [2024-07-12 11:28:55.187707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.187733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.187822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.187849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.187951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.187979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.188520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.188893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.188921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.189112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.189683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.189918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.189945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.190246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.190872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.190902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.190990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.191475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.191936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.191963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.192059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.192177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.192284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.192394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.192505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.192621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.192728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.192875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.192912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.193264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.193818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.193949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.193976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.194431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.194874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.194902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.194981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.195543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.195950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.195978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 00:24:29.393 [2024-07-12 11:28:55.196060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.393 [2024-07-12 11:28:55.196086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.393 qpair failed and we were unable to recover it. 
00:24:29.393 [2024-07-12 11:28:55.196241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.196363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.196480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.196595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.196710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.196827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.196947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.196981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.197428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.197955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.197983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.198061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.198642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.198908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.198995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.199125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.199234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.199344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.199457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.199568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.199680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.199786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.199900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.199927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.200394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.200863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.200895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.200974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.201556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.201921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.201998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.202110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.202228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.202338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.202452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.202592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.202698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.202840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.202969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.202995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.203282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.203826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.203953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.203979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.204421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.204927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.204953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 
00:24:29.394 [2024-07-12 11:28:55.205030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.205056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.205155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.205183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.205294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.205324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.205400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.394 [2024-07-12 11:28:55.205426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.394 qpair failed and we were unable to recover it. 00:24:29.394 [2024-07-12 11:28:55.205508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.205535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.205614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.205642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.205727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.205756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.205838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.205870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.205981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.206087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.206222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.206332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.206477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.206587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.206728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.206838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.206871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.206997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.207451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.207951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.207980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.208066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.208202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.208312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.208447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.208561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.208745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.208853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.208963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.208989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.209306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.209894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.209922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.209999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.210438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.210911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.210938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.211017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.211154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.211265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.211428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.211536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.211691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.211820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.211958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.211985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.212075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.212191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.212332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.212437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.212548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.212660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.212773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.212909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.212937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.395 [2024-07-12 11:28:55.213488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 00:24:29.395 [2024-07-12 11:28:55.213954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.395 [2024-07-12 11:28:55.213980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.395 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.227812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.227838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.227931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.227958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.228406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.228920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.228949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.229042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.229634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.229902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.229986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.230132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.230234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.230346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.230481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.230597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.230703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.230810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.230924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.230951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.231407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.231934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.231961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.232044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.232070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.232179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.232205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.232285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.232311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.232402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.232430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 00:24:29.397 [2024-07-12 11:28:55.232513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.397 [2024-07-12 11:28:55.232542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.397 qpair failed and we were unable to recover it. 
00:24:29.397 [2024-07-12 11:28:55.232627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.232655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.232737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.232763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.232877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.232904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.233238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.233833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.233957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.233984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.234419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.234944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.234971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.235081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.235183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.235317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.235423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.235558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.235660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.235800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.235916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.235943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.236023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.236140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.236258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.236369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.236505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.236647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 00:24:29.398 [2024-07-12 11:28:55.236772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.398 [2024-07-12 11:28:55.236813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.398 qpair failed and we were unable to recover it. 
00:24:29.398 [2024-07-12 11:28:55.236950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.236990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.237889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.237917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.238896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.238936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.239883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.239916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.240863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.240985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.241925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.241953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.242031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.398 [2024-07-12 11:28:55.242057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.398 qpair failed and we were unable to recover it.
00:24:29.398 [2024-07-12 11:28:55.242199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.242226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.242341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.242367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.242452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.242479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.242587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.242613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.242718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.242745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.242854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.242889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.243840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.243971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.244920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.244950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.245909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.245937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.246898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.246926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.247952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.247979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.248912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.248995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.249848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.249888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.250951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.250978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.251091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.251118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.251192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.251219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.251331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.251358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.251471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.399 [2024-07-12 11:28:55.251498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.399 qpair failed and we were unable to recover it.
00:24:29.399 [2024-07-12 11:28:55.251595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.251635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.399 qpair failed and we were unable to recover it. 00:24:29.399 [2024-07-12 11:28:55.251718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.251747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.399 qpair failed and we were unable to recover it. 00:24:29.399 [2024-07-12 11:28:55.251862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.251897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.399 qpair failed and we were unable to recover it. 00:24:29.399 [2024-07-12 11:28:55.251985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.252012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.399 qpair failed and we were unable to recover it. 00:24:29.399 [2024-07-12 11:28:55.252091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.252118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.399 qpair failed and we were unable to recover it. 
00:24:29.399 [2024-07-12 11:28:55.252226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.252252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.399 qpair failed and we were unable to recover it. 00:24:29.399 [2024-07-12 11:28:55.252372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.399 [2024-07-12 11:28:55.252401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.252516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.252543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.252682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.252709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.252798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.252824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.252907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.252933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.253507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.253913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.253999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.254104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.254241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.254359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.254518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.254653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.254760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.254864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.254899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.254980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.255091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.255228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.255334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.255439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.255549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.255658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.255773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.255885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.255916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.256485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.256950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.256976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.257066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.257663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.257916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.257997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.258139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.258280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.258422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.258572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.258693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.258806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.258932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.258960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.259519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.259896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.259925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.260034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.260061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.260142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.260174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.260258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.260285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.260367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.260393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.260545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.260572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 00:24:29.400 [2024-07-12 11:28:55.260651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.400 [2024-07-12 11:28:55.260677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.400 qpair failed and we were unable to recover it. 
00:24:29.400 [2024-07-12 11:28:55.260761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.400 [2024-07-12 11:28:55.260787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.400 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 11:28:55.260 through 11:28:55.275, cycling over tqpair=0xb4f200, 0x7fa0d8000b90, 0x7fa0e0000b90, and 0x7fa0e8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:24:29.402 [2024-07-12 11:28:55.275188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.275298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.275400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.275535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.275652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.275777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.275897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.275925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.276388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.276872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.276903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.276986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.277546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.277906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.277946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.278033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.278178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.278285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.278419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.278534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.278677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.278827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.278948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.278974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 
00:24:29.402 [2024-07-12 11:28:55.279434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.402 [2024-07-12 11:28:55.279801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.402 qpair failed and we were unable to recover it. 00:24:29.402 [2024-07-12 11:28:55.279886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.279913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.280003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.280632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.280894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.280974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.281095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.281197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.281308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.281416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.281552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.281665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.281775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.281911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.281939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.282376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.282850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.282965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.282992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.283638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.283968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.283994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.284082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.284218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.284333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.284466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.284606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.284705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.284846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.284962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.284989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.285073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.285099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.285208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.285234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 00:24:29.403 [2024-07-12 11:28:55.285310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.403 [2024-07-12 11:28:55.285337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.403 qpair failed and we were unable to recover it. 
00:24:29.403 [2024-07-12 11:28:55.285416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.403 [2024-07-12 11:28:55.285444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.403 qpair failed and we were unable to recover it.
00:24:29.403 [2024-07-12 11:28:55.285528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.403 [2024-07-12 11:28:55.285555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.403 qpair failed and we were unable to recover it.
00:24:29.403 [2024-07-12 11:28:55.285648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.403 [2024-07-12 11:28:55.285688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.403 qpair failed and we were unable to recover it.
00:24:29.403 [2024-07-12 11:28:55.285806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.403 [2024-07-12 11:28:55.285835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.403 qpair failed and we were unable to recover it.
00:24:29.403 [2024-07-12 11:28:55.285931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.403 [2024-07-12 11:28:55.285958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.403 qpair failed and we were unable to recover it.
00:24:29.405 [2024-07-12 11:28:55.298107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.298225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.298329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.298474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.298582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.298687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.298832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.298951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.298978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.299064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.299202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.299314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.299430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.299539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.299687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.299794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.299903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.299930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.300523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.300916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.300997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.301108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.301224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.301327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.301437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.301563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.301719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.301832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.301946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.301972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.302267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.302877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.302905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.302992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.303019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.303099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.303125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.303245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.303273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.303366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.303394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 
00:24:29.405 [2024-07-12 11:28:55.303478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.303506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.303589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.405 [2024-07-12 11:28:55.303616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.405 qpair failed and we were unable to recover it. 00:24:29.405 [2024-07-12 11:28:55.303698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.303725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.303808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.303834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.303921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.303948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.304025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.304169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.304278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.304388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.304529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.304675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.304822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.304862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.304994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.305100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.305202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.305313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.305419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.305576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.305717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.305884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.305914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.306004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.306639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.306917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.306999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.307102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.307242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.307395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.307540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.307648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.307787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.307819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.307964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.308100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.308208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.308327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.308444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.308612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.308725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.308844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.308889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.309241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.309847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.309967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.309994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.406 [2024-07-12 11:28:55.310415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 00:24:29.406 [2024-07-12 11:28:55.310878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.406 [2024-07-12 11:28:55.310919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.406 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.311014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.311610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.311945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.311973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.312061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.312168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.312271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.312381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.312522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.312788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.312898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.312926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.313009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.313119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.313231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.313373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.407 [2024-07-12 11:28:55.313493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.313607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:29.407 [2024-07-12 11:28:55.313634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.313721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:29.407 [2024-07-12 11:28:55.313831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.313944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.313971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.314055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.407 [2024-07-12 11:28:55.314169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.314276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.314422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.314536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.314642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.314761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.314883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.314924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.315128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.315728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.315956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.315982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.316289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 
00:24:29.407 [2024-07-12 11:28:55.316888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.407 [2024-07-12 11:28:55.316916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.407 qpair failed and we were unable to recover it. 00:24:29.407 [2024-07-12 11:28:55.316995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 
00:24:29.408 [2024-07-12 11:28:55.317455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.317954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.317982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 
00:24:29.408 [2024-07-12 11:28:55.318067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.318211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.318337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.318465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.318625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 
00:24:29.408 [2024-07-12 11:28:55.318733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.318848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.318960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.318991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.319082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.319184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 
00:24:29.408 [2024-07-12 11:28:55.319292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.319433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.319538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.319645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 00:24:29.408 [2024-07-12 11:28:55.319748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.408 [2024-07-12 11:28:55.319775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.408 qpair failed and we were unable to recover it. 
00:24:29.408 [2024-07-12 11:28:55.319861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.319896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.319984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.320970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.320997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.321912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.321952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.322909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.322991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.408 [2024-07-12 11:28:55.323826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.408 qpair failed and we were unable to recover it.
00:24:29.408 [2024-07-12 11:28:55.323955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.323984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.324896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.324978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.325921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.325949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.326965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.326992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.327920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.327949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.328909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.328993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.329886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.329976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.409 [2024-07-12 11:28:55.330755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.409 [2024-07-12 11:28:55.330795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.409 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.330884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.330913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.331003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.331030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.331117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.331226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.331331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.331432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.331538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.331652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.331766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.331886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.331913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.332244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.332833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.332953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.332980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.333406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.333871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.333899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.333984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.334013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.334093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.334120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.334235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.334262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.334338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.334364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.334444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.334470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.334559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.334587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:29.410 [2024-07-12 11:28:55.334676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.334706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:29.410 [2024-07-12 11:28:55.334802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.334843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.334947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.334976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.335063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.335090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.410 [2024-07-12 11:28:55.335172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.335200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.335279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.335307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.335395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.335423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.335497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.410 [2024-07-12 11:28:55.335524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.410 qpair failed and we were unable to recover it.
00:24:29.410 [2024-07-12 11:28:55.335608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.335635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.335712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.335738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.335817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.335843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.335962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.335992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.336069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.336095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 
00:24:29.410 [2024-07-12 11:28:55.336176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.336202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.336283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.410 [2024-07-12 11:28:55.336309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.410 qpair failed and we were unable to recover it. 00:24:29.410 [2024-07-12 11:28:55.336390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.336416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.336506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.336533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.336618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.336652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.336732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.336757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.336835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.336874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.336967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.336994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.337116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.337243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.337354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.337474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.337591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.337699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.337809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.337929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.337955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.338491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.338926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.338953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.339051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.339613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.339969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.339995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.340087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.340116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.340232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.340376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.340402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.340508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.340534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.340611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.340637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 00:24:29.411 [2024-07-12 11:28:55.340729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.411 [2024-07-12 11:28:55.340758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.411 qpair failed and we were unable to recover it. 
00:24:29.411 [2024-07-12 11:28:55.340839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.340873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.340964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.340989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.341968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.341995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.411 [2024-07-12 11:28:55.342939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.411 [2024-07-12 11:28:55.342966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.411 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.343943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.343971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.344942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.344973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.345894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.345921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.346860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.346906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.347882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.347979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.348946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.348975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.349930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.349957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.350033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.350059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.412 [2024-07-12 11:28:55.350146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.412 [2024-07-12 11:28:55.350173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.412 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.350273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.350382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.350502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.350622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.350726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.350888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.350991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.351895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.351984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.352959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.352985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.353892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.353920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.354027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.354178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.354284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.354404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.354522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.413 [2024-07-12 11:28:55.354646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.413 qpair failed and we were unable to recover it.
00:24:29.413 [2024-07-12 11:28:55.354734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.354763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.354846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.354879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.354968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.354994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.355081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.355230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 
00:24:29.413 [2024-07-12 11:28:55.355353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.355470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.355577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.355735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.355885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.355914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 
00:24:29.413 [2024-07-12 11:28:55.356001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 
00:24:29.413 [2024-07-12 11:28:55.356602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.413 [2024-07-12 11:28:55.356969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.413 [2024-07-12 11:28:55.356995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.413 qpair failed and we were unable to recover it. 00:24:29.414 [2024-07-12 11:28:55.357107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.414 [2024-07-12 11:28:55.357134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.414 qpair failed and we were unable to recover it. 
00:24:29.414 [2024-07-12 11:28:55.357249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.357276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.357361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.357387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.357473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 Malloc0
00:24:29.414 [2024-07-12 11:28:55.357499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.357592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.357632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.357740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.357780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.414 [2024-07-12 11:28:55.357887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.357915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.358000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:29.414 [2024-07-12 11:28:55.358027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.414 [2024-07-12 11:28:55.358225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.414 [2024-07-12 11:28:55.358336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.358449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.358565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.358688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.358796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.358922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.358949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.359902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.359930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.360897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.360977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:29.414 [2024-07-12 11:28:55.361295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.361961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.361989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.362949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.362978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.363067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.363093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.363190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.363217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.363299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.363325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.363402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.363427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.414 qpair failed and we were unable to recover it.
00:24:29.414 [2024-07-12 11:28:55.363506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.414 [2024-07-12 11:28:55.363532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.363614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.363640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.363721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.363747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.363828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.363854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.363954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.363981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.364897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.364982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.415 [2024-07-12 11:28:55.365854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.415 qpair failed and we were unable to recover it.
00:24:29.415 [2024-07-12 11:28:55.365962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.365989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 [2024-07-12 11:28:55.366585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.366940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.366966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 [2024-07-12 11:28:55.367154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 [2024-07-12 11:28:55.367739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.367895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.367984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.368103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.368244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 [2024-07-12 11:28:55.368352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.368455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.368575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.368681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.368795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 [2024-07-12 11:28:55.368919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.368948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.369030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.369138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.369254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 [2024-07-12 11:28:55.369370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.415 [2024-07-12 11:28:55.369494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.415 [2024-07-12 11:28:55.369602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.415 [2024-07-12 11:28:55.369726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.415 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:29.415 [2024-07-12 11:28:55.369863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.369901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 
00:24:29.415 [2024-07-12 11:28:55.369986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.415 [2024-07-12 11:28:55.370012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.415 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.370550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.370964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.370991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.371205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.371769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.371911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.371993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.372111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.372246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.372357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.372476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.372598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.372716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.372845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.372970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.372996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.373530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.373919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.373946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.374041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.374072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.374160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.374190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.374275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.374302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.374383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.374416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.374505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.374533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 00:24:29.416 [2024-07-12 11:28:55.374616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.416 [2024-07-12 11:28:55.374642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.416 qpair failed and we were unable to recover it. 
00:24:29.416 [2024-07-12 11:28:55.374744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.374771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.374855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.374889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.374971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.374997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.416 [2024-07-12 11:28:55.375770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.416 [2024-07-12 11:28:55.375796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.416 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.375879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.375906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.375998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.376914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.376942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.417 [2024-07-12 11:28:55.377514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.417 [2024-07-12 11:28:55.377823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.377949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.377976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.378883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.378912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.379937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.379964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.380900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.380927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.381910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.381937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.382028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.382055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.382142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.382169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.382256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.382284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.382372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.382398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.382486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.417 [2024-07-12 11:28:55.382512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.417 qpair failed and we were unable to recover it.
00:24:29.417 [2024-07-12 11:28:55.382598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.382624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.382706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.382732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.382817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.382843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.382941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.382968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.383901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.383931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.384922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.384949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.385037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.385144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.385262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.385372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.385484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.418 [2024-07-12 11:28:55.385510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.385590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:29.418 [2024-07-12 11:28:55.385701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.418 [2024-07-12 11:28:55.385845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.418 [2024-07-12 11:28:55.385958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.385984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.386918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.386945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e0000b90 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.418 [2024-07-12 11:28:55.387899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420
00:24:29.418 qpair failed and we were unable to recover it.
00:24:29.418 [2024-07-12 11:28:55.387990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 
00:24:29.418 [2024-07-12 11:28:55.388525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f200 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0e8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.388900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.388988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.389016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 
00:24:29.418 [2024-07-12 11:28:55.389097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.389123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.389206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.389234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 00:24:29.418 [2024-07-12 11:28:55.389325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.418 [2024-07-12 11:28:55.389357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa0d8000b90 with addr=10.0.0.2, port=4420 00:24:29.418 qpair failed and we were unable to recover it. 
00:24:29.418 [2024-07-12 11:28:55.389659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:29.418 [2024-07-12 11:28:55.391949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.418 [2024-07-12 11:28:55.392068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.392094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.392109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.392123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.392157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.419 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:29.419 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:29.419 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:29.419 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.419 11:28:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 680693
00:24:29.419 [2024-07-12 11:28:55.401786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.401890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.401916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.401931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.401944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.401974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.411788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.411885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.411911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.411925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.411950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.411980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.421838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.421950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.421981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.421997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.422010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.422053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.431827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.431931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.431957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.431972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.431989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.432019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.441812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.441915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.441941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.441957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.441976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.442006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.451878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.451971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.451996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.452011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.452024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.452067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.461834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.461941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.461968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.461982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.462006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.462041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.419 [2024-07-12 11:28:55.471888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.419 [2024-07-12 11:28:55.471987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.419 [2024-07-12 11:28:55.472012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.419 [2024-07-12 11:28:55.472028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.419 [2024-07-12 11:28:55.472041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.419 [2024-07-12 11:28:55.472070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.419 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.481955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.482046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.482071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.482086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.482099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.482129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.491923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.492014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.492038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.492053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.492065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.492095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.501945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.502037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.502063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.502077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.502090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.502120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.512040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.512122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.512153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.512169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.512183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.512212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.522055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.522142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.522167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.522184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.522197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.522226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.532069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.532157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.532183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.532197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.532211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.532240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.542065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.542157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.542182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.542197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.542211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.679 [2024-07-12 11:28:55.542240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.679 qpair failed and we were unable to recover it.
00:24:29.679 [2024-07-12 11:28:55.552148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.679 [2024-07-12 11:28:55.552242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.679 [2024-07-12 11:28:55.552267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.679 [2024-07-12 11:28:55.552281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.679 [2024-07-12 11:28:55.552299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.552330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.562300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.562401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.562426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.562441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.562454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.562484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.572201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.572306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.572331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.572346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.572359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.572388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.582209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.582300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.582325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.582340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.582354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.582383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.592273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.592367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.592392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.592408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.592421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.592450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.602272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.602358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.602384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.602399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.602412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.602441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.612274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.612374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.612399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.612413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.612427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.612456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.622298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.622388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.622413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.622428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.622441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.622470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.632356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:29.680 [2024-07-12 11:28:55.632449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:29.680 [2024-07-12 11:28:55.632475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:29.680 [2024-07-12 11:28:55.632490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:29.680 [2024-07-12 11:28:55.632503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:29.680 [2024-07-12 11:28:55.632532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:29.680 qpair failed and we were unable to recover it.
00:24:29.680 [2024-07-12 11:28:55.642377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.680 [2024-07-12 11:28:55.642470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.680 [2024-07-12 11:28:55.642495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.680 [2024-07-12 11:28:55.642516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.680 [2024-07-12 11:28:55.642530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.680 [2024-07-12 11:28:55.642560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.680 qpair failed and we were unable to recover it. 
00:24:29.680 [2024-07-12 11:28:55.652395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.680 [2024-07-12 11:28:55.652487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.680 [2024-07-12 11:28:55.652513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.680 [2024-07-12 11:28:55.652528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.680 [2024-07-12 11:28:55.652541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.680 [2024-07-12 11:28:55.652570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.680 qpair failed and we were unable to recover it. 
00:24:29.680 [2024-07-12 11:28:55.662410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.680 [2024-07-12 11:28:55.662502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.680 [2024-07-12 11:28:55.662527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.680 [2024-07-12 11:28:55.662542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.680 [2024-07-12 11:28:55.662555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.680 [2024-07-12 11:28:55.662584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.680 qpair failed and we were unable to recover it. 
00:24:29.680 [2024-07-12 11:28:55.672546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.680 [2024-07-12 11:28:55.672649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.680 [2024-07-12 11:28:55.672678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.680 [2024-07-12 11:28:55.672693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.680 [2024-07-12 11:28:55.672707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.680 [2024-07-12 11:28:55.672737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.680 qpair failed and we were unable to recover it. 
00:24:29.680 [2024-07-12 11:28:55.682499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.680 [2024-07-12 11:28:55.682584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.680 [2024-07-12 11:28:55.682608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.680 [2024-07-12 11:28:55.682623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.680 [2024-07-12 11:28:55.682635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.680 [2024-07-12 11:28:55.682664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.692523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.692637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.692661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.692676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.692690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.692721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.702523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.702612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.702637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.702651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.702664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.702693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.712579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.712689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.712715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.712730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.712743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.712772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.722599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.722691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.722715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.722730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.722743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.722772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.732602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.732684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.732709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.732729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.732743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.732772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.742653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.742752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.742780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.742797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.742810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.742840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.752672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.752757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.752782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.752797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.752811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.752853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.762784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.762888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.762916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.762933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.762946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.762977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.772728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.772808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.772833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.772847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.772860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.772899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.782787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.782889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.782915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.782930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.782944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.782973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.792807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.792900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.792928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.792943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.792957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.792986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.681 [2024-07-12 11:28:55.802822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.681 [2024-07-12 11:28:55.802917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.681 [2024-07-12 11:28:55.802942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.681 [2024-07-12 11:28:55.802957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.681 [2024-07-12 11:28:55.802970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.681 [2024-07-12 11:28:55.803000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.681 qpair failed and we were unable to recover it. 
00:24:29.940 [2024-07-12 11:28:55.812873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.940 [2024-07-12 11:28:55.812962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.940 [2024-07-12 11:28:55.812988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.940 [2024-07-12 11:28:55.813003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.940 [2024-07-12 11:28:55.813016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.940 [2024-07-12 11:28:55.813046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.940 qpair failed and we were unable to recover it. 
00:24:29.940 [2024-07-12 11:28:55.822925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.940 [2024-07-12 11:28:55.823018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.940 [2024-07-12 11:28:55.823051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.940 [2024-07-12 11:28:55.823067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.940 [2024-07-12 11:28:55.823080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.940 [2024-07-12 11:28:55.823109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.940 qpair failed and we were unable to recover it. 
00:24:29.940 [2024-07-12 11:28:55.832929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.940 [2024-07-12 11:28:55.833016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.940 [2024-07-12 11:28:55.833040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.940 [2024-07-12 11:28:55.833055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.940 [2024-07-12 11:28:55.833068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.940 [2024-07-12 11:28:55.833098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.940 qpair failed and we were unable to recover it. 
00:24:29.940 [2024-07-12 11:28:55.842957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.940 [2024-07-12 11:28:55.843045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.940 [2024-07-12 11:28:55.843070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.940 [2024-07-12 11:28:55.843085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.940 [2024-07-12 11:28:55.843098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.940 [2024-07-12 11:28:55.843127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.940 qpair failed and we were unable to recover it. 
00:24:29.940 [2024-07-12 11:28:55.852986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.940 [2024-07-12 11:28:55.853083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.940 [2024-07-12 11:28:55.853107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.940 [2024-07-12 11:28:55.853122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.940 [2024-07-12 11:28:55.853135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.940 [2024-07-12 11:28:55.853164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.940 qpair failed and we were unable to recover it. 
00:24:29.940 [2024-07-12 11:28:55.863026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.863114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.863138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.863154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.863167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.863202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.873064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.873151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.873179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.873196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.873209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.873239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.883075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.883172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.883198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.883213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.883226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.883256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.893153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.893239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.893264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.893278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.893292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.893321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.903140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.903233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.903258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.903274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.903287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.903316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.913184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.913270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.913300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.913316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.913329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.913358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.923210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.923301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.923326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.923340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.923353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.923382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.933216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.933303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.933327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.933342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.933356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.933385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.943334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.943430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.943458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.943475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.943488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.943518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.953323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.953433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.953459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.953475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.953493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.953524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.963328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.963447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.963477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.963494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.963507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.963537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.973335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.973431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.973456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.973470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.973483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.973513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.983364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.983471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.983497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.941 [2024-07-12 11:28:55.983512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.941 [2024-07-12 11:28:55.983525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.941 [2024-07-12 11:28:55.983554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.941 qpair failed and we were unable to recover it. 
00:24:29.941 [2024-07-12 11:28:55.993416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.941 [2024-07-12 11:28:55.993502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.941 [2024-07-12 11:28:55.993527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:55.993542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:55.993555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:55.993583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.003409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.003492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.003517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.003531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.003544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.003574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.013469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.013556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.013580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.013595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.013608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.013638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.023512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.023605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.023629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.023644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.023657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.023686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.033510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.033599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.033627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.033644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.033657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.033688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.043556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.043647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.043672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.043687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.043705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.043736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.053541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.053626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.053650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.053665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.053678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.053708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:29.942 [2024-07-12 11:28:56.063615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:29.942 [2024-07-12 11:28:56.063712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:29.942 [2024-07-12 11:28:56.063737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:29.942 [2024-07-12 11:28:56.063751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:29.942 [2024-07-12 11:28:56.063764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:29.942 [2024-07-12 11:28:56.063793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.942 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.073605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.073702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.073727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.073742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.073755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.073784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.083722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.083812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.083836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.083852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.083872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.083903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.093671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.093764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.093788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.093803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.093816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.093845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.103695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.103784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.103809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.103823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.103836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.103872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.113715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.113803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.113827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.113843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.113856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.113892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.123748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.123835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.123860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.123885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.123910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.123941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.133782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.133873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.133899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.133919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.133933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.133976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.143818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.143916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.143942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.143957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.143970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.144012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.153829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.153923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.153949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.153964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.153977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.154007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.163888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.164007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.164034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.164050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.164062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.164092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.173905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.173994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.174018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.174032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.174045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.174075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.183942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.201 [2024-07-12 11:28:56.184031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.201 [2024-07-12 11:28:56.184055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.201 [2024-07-12 11:28:56.184070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.201 [2024-07-12 11:28:56.184083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.201 [2024-07-12 11:28:56.184112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.201 qpair failed and we were unable to recover it. 
00:24:30.201 [2024-07-12 11:28:56.193952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.194035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.194060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.194074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.194087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.194116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.203999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.204086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.204111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.204125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.204138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.204167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.214042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.214131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.214155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.214169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.214183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.214212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.224050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.224144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.224173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.224189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.224202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.224232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.234097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.234185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.234209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.234225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.234238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.234267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.244204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.244316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.244343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.244359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.244371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.244400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.254139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.254224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.254249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.254263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.254277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.254306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.264189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.264312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.264338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.264354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.264366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.264415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.274270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.274401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.274427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.274443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.274455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.274484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.284227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.284308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.284332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.284346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.284359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.284401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.294266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.294393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.294420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.294435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.294447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.294477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.304311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.304411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.304436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.304451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.304463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.202 [2024-07-12 11:28:56.304492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.202 qpair failed and we were unable to recover it. 
00:24:30.202 [2024-07-12 11:28:56.314357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.202 [2024-07-12 11:28:56.314468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.202 [2024-07-12 11:28:56.314501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.202 [2024-07-12 11:28:56.314517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.202 [2024-07-12 11:28:56.314531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.203 [2024-07-12 11:28:56.314563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.203 qpair failed and we were unable to recover it. 
00:24:30.203 [2024-07-12 11:28:56.324313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.203 [2024-07-12 11:28:56.324403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.203 [2024-07-12 11:28:56.324428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.203 [2024-07-12 11:28:56.324443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.203 [2024-07-12 11:28:56.324455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.203 [2024-07-12 11:28:56.324485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.203 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.334425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.334514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.334540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.334559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.334574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.334604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.344381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.344469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.344494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.344508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.344521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.344550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.354426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.354518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.354542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.354557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.354575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.354605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.364445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.364527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.364551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.364566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.364579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.364608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.374468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.374605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.374635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.374652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.374665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.374710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.384581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.384674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.384699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.384713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.384727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.384756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.394524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.394642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.394668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.394683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.394696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.394725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.404547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.404637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.404662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.404676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.404689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.404719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.414566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.414675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.414701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.414717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.414730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.414759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.424652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.424750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.424780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.424796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.424810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.424852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.434646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.434755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.434782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.461 [2024-07-12 11:28:56.434798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.461 [2024-07-12 11:28:56.434812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.461 [2024-07-12 11:28:56.434841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.461 qpair failed and we were unable to recover it. 
00:24:30.461 [2024-07-12 11:28:56.444708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.461 [2024-07-12 11:28:56.444818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.461 [2024-07-12 11:28:56.444848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.444873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.444896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.444928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.454705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.454821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.454848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.454863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.454885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.454915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.464739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.464828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.464853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.464875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.464891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.464921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.474797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.474897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.474922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.474937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.474950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.474980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.484780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.484873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.484899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.484914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.484927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.484955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.494871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.494959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.494984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.494999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.495012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.495042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.504847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.504947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.504971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.504986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.504999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.505028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.514924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.515068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.515098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.515115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.515128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.515159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.524903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.524988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.525013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.525029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.525042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.525085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.534918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.535000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.535025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.535046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.535060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.535089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.544957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.545049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.545074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.545088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.545101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.462 [2024-07-12 11:28:56.545130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.462 qpair failed and we were unable to recover it. 
00:24:30.462 [2024-07-12 11:28:56.555046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.462 [2024-07-12 11:28:56.555164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.462 [2024-07-12 11:28:56.555191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.462 [2024-07-12 11:28:56.555206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.462 [2024-07-12 11:28:56.555220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.463 [2024-07-12 11:28:56.555249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.463 qpair failed and we were unable to recover it. 
00:24:30.463 [2024-07-12 11:28:56.565011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.463 [2024-07-12 11:28:56.565095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.463 [2024-07-12 11:28:56.565121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.463 [2024-07-12 11:28:56.565139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.463 [2024-07-12 11:28:56.565153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.463 [2024-07-12 11:28:56.565184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.463 qpair failed and we were unable to recover it.
00:24:30.463 [2024-07-12 11:28:56.575060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.463 [2024-07-12 11:28:56.575170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.463 [2024-07-12 11:28:56.575200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.463 [2024-07-12 11:28:56.575216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.463 [2024-07-12 11:28:56.575229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.463 [2024-07-12 11:28:56.575271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.463 qpair failed and we were unable to recover it.
00:24:30.463 [2024-07-12 11:28:56.585094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.463 [2024-07-12 11:28:56.585209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.463 [2024-07-12 11:28:56.585236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.463 [2024-07-12 11:28:56.585252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.463 [2024-07-12 11:28:56.585264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.463 [2024-07-12 11:28:56.585294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.463 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.595139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.595253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.595279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.595294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.595308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.595337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.605138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.605235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.605259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.605274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.605287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.605316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.615149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.615244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.615271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.615288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.615302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.615332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.625188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.625278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.625307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.625323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.625336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.625368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.635213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.635353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.635379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.635395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.635409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.635454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.645243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.645333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.645359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.645374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.645390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.645420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.655328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.655414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.655439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.655453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.655466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.655496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.665303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.665397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.665422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.665437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.665451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.665487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.675327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.675416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.675441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.675456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.675469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.675499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.685369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.685452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.685476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.685491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.685504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.685534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.695367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.695477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.695502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.695517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.695531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.722 [2024-07-12 11:28:56.695561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.722 qpair failed and we were unable to recover it.
00:24:30.722 [2024-07-12 11:28:56.705452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.722 [2024-07-12 11:28:56.705564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.722 [2024-07-12 11:28:56.705589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.722 [2024-07-12 11:28:56.705605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.722 [2024-07-12 11:28:56.705620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.705650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.715433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.715563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.715594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.715611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.715625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.715655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.725496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.725604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.725629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.725646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.725660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.725703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.735485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.735571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.735595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.735610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.735623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.735653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.745520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.745638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.745662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.745677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.745692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.745722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.755578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.755669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.755694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.755712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.755726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.755762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.765592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.765720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.765748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.765764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.765778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.765809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.775615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.775745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.775771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.775788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.775802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.775832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.785627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.785742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.785768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.785783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.785796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.785827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.795662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.795779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.795805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.795820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.795834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.795871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.805682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.805773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.805798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.805813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.805826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.805856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.815736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.815852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.815885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.815901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.815914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.815956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.825747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.825863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.723 [2024-07-12 11:28:56.825895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.723 [2024-07-12 11:28:56.825910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.723 [2024-07-12 11:28:56.825924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.723 [2024-07-12 11:28:56.825955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.723 qpair failed and we were unable to recover it.
00:24:30.723 [2024-07-12 11:28:56.835789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.723 [2024-07-12 11:28:56.835922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.724 [2024-07-12 11:28:56.835949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.724 [2024-07-12 11:28:56.835964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.724 [2024-07-12 11:28:56.835988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.724 [2024-07-12 11:28:56.836019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.724 qpair failed and we were unable to recover it.
00:24:30.724 [2024-07-12 11:28:56.845828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.724 [2024-07-12 11:28:56.845953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.724 [2024-07-12 11:28:56.845980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.724 [2024-07-12 11:28:56.845997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.724 [2024-07-12 11:28:56.846018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.724 [2024-07-12 11:28:56.846050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.724 qpair failed and we were unable to recover it.
00:24:30.982 [2024-07-12 11:28:56.855815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.982 [2024-07-12 11:28:56.855909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.982 [2024-07-12 11:28:56.855934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.982 [2024-07-12 11:28:56.855950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.982 [2024-07-12 11:28:56.855963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.982 [2024-07-12 11:28:56.855993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.982 qpair failed and we were unable to recover it.
00:24:30.982 [2024-07-12 11:28:56.865908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.982 [2024-07-12 11:28:56.866019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.982 [2024-07-12 11:28:56.866043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.982 [2024-07-12 11:28:56.866058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.982 [2024-07-12 11:28:56.866072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.982 [2024-07-12 11:28:56.866103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.982 qpair failed and we were unable to recover it.
00:24:30.982 [2024-07-12 11:28:56.875936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.982 [2024-07-12 11:28:56.876035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.982 [2024-07-12 11:28:56.876060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.982 [2024-07-12 11:28:56.876075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.982 [2024-07-12 11:28:56.876089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.982 [2024-07-12 11:28:56.876118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.982 qpair failed and we were unable to recover it.
00:24:30.982 [2024-07-12 11:28:56.885931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.982 [2024-07-12 11:28:56.886055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.983 [2024-07-12 11:28:56.886085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.983 [2024-07-12 11:28:56.886102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.983 [2024-07-12 11:28:56.886116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.983 [2024-07-12 11:28:56.886146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.983 qpair failed and we were unable to recover it.
00:24:30.983 [2024-07-12 11:28:56.895988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.983 [2024-07-12 11:28:56.896103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.983 [2024-07-12 11:28:56.896128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.983 [2024-07-12 11:28:56.896143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.983 [2024-07-12 11:28:56.896157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.983 [2024-07-12 11:28:56.896187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.983 qpair failed and we were unable to recover it.
00:24:30.983 [2024-07-12 11:28:56.906001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.983 [2024-07-12 11:28:56.906095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.983 [2024-07-12 11:28:56.906123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.983 [2024-07-12 11:28:56.906138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.983 [2024-07-12 11:28:56.906152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.983 [2024-07-12 11:28:56.906183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.983 qpair failed and we were unable to recover it.
00:24:30.983 [2024-07-12 11:28:56.916013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:30.983 [2024-07-12 11:28:56.916096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:30.983 [2024-07-12 11:28:56.916121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:30.983 [2024-07-12 11:28:56.916136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:30.983 [2024-07-12 11:28:56.916149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:30.983 [2024-07-12 11:28:56.916178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:30.983 qpair failed and we were unable to recover it.
00:24:30.983 [2024-07-12 11:28:56.926055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.926141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.926166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.926181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.926194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.926224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.936072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.936168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.936193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.936213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.936227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.936258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.946116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.946208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.946232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.946247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.946260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.946290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.956157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.956255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.956279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.956294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.956308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.956338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.966171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.966252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.966277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.966291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.966304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.966334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.976303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.976387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.976412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.976426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.976439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.976469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.986204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.986330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.986356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.986371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.986385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.986415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:56.996240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:56.996331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:56.996359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:56.996376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:56.996389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:56.996420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:57.006250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:57.006335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.983 [2024-07-12 11:28:57.006360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.983 [2024-07-12 11:28:57.006376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.983 [2024-07-12 11:28:57.006389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.983 [2024-07-12 11:28:57.006418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.983 qpair failed and we were unable to recover it. 
00:24:30.983 [2024-07-12 11:28:57.016280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.983 [2024-07-12 11:28:57.016369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.016394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.016408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.016421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.016452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.026356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.026462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.026487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.026508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.026522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.026553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.036369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.036481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.036506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.036521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.036535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.036566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.046396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.046488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.046516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.046531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.046545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.046575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.056429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.056524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.056549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.056563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.056576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.056606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.066462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.066556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.066581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.066596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.066609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.066639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.076463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.076550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.076576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.076591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.076604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.076634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.086497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.086625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.086652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.086668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.086682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.086712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.096500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.096591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.096615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.096630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.096643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.096673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:30.984 [2024-07-12 11:28:57.106546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:30.984 [2024-07-12 11:28:57.106633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:30.984 [2024-07-12 11:28:57.106657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:30.984 [2024-07-12 11:28:57.106672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:30.984 [2024-07-12 11:28:57.106685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:30.984 [2024-07-12 11:28:57.106714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:30.984 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.116632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.116728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.116757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.116773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.116787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.116817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.126612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.126698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.126723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.126737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.126750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.126779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.136645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.136729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.136754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.136769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.136782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.136812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.146661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.146755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.146780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.146794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.146807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.146837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.156707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.156795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.156820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.156834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.156847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.156888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.166722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.166807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.166832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.166847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.166860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.166898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.176833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.176923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.176947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.176962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.176975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.177006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.186781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.186897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.186922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.186937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.242 [2024-07-12 11:28:57.186950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.242 [2024-07-12 11:28:57.186980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.242 qpair failed and we were unable to recover it. 
00:24:31.242 [2024-07-12 11:28:57.196791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.242 [2024-07-12 11:28:57.196889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.242 [2024-07-12 11:28:57.196914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.242 [2024-07-12 11:28:57.196929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.243 [2024-07-12 11:28:57.196942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.243 [2024-07-12 11:28:57.196972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.243 qpair failed and we were unable to recover it. 
00:24:31.243 [2024-07-12 11:28:57.206816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.206951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.206983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.206999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.207013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.207043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.216850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.216936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.216961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.216976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.216989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.217033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.226910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.227031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.227056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.227072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.227085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.227116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.236974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.237063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.237088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.237102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.237114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.237144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.246947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.247030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.247055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.247070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.247089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.247132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.256984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.257076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.257101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.257116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.257130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.257160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.267038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.267128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.267154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.267170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.267184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.267215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.277061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.277151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.277178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.277194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.277207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.277237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.287115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.287247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.287275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.243 [2024-07-12 11:28:57.287291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.243 [2024-07-12 11:28:57.287306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.243 [2024-07-12 11:28:57.287338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.243 qpair failed and we were unable to recover it.
00:24:31.243 [2024-07-12 11:28:57.297157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.243 [2024-07-12 11:28:57.297253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.243 [2024-07-12 11:28:57.297281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.297296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.297310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.297340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.307113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.307202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.307228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.307243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.307256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.307288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.317152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.317234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.317261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.317276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.317290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.317332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.327148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.327244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.327270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.327286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.327299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.327331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.337177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.337268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.337295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.337316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.337331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.337361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.347227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.347319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.347345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.347361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.347374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.347419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.357289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.357378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.357404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.357420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.357434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.357464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.244 [2024-07-12 11:28:57.367318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.244 [2024-07-12 11:28:57.367416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.244 [2024-07-12 11:28:57.367440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.244 [2024-07-12 11:28:57.367455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.244 [2024-07-12 11:28:57.367468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.244 [2024-07-12 11:28:57.367498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.244 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.377338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.377468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.377495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.377510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.377523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.377554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.387364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.387454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.387480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.387496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.387509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.387552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.397406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.397540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.397567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.397583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.397596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.397627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.407404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.407492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.407519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.407535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.407548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.407580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.417399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.417486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.417511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.417526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.417539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.417569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.427472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.427606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.427637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.427659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.427674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.427718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.437530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.437632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.437657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.437672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.437685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.437715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.447535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.447646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.447671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.447686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.447700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.447731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.457571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.457671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.457697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.457712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.457725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.457755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.467604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.467714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.467740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.467756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.503 [2024-07-12 11:28:57.467769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.503 [2024-07-12 11:28:57.467800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.503 qpair failed and we were unable to recover it.
00:24:31.503 [2024-07-12 11:28:57.477726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.503 [2024-07-12 11:28:57.477815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.503 [2024-07-12 11:28:57.477839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.503 [2024-07-12 11:28:57.477853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.477872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.477904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.487639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.487742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.487766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.487781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.487794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.487825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.497674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.497784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.497808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.497824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.497837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.497877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.507704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.507796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.507820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.507835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.507850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.507888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.517748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.517833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.517862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.517887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.517901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.517931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.527829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.527938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.527965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.527981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.527995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.528027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.537817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.537911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.537946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.537961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.537973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.538011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.547821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.547918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.547943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.547958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.547971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.548014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.557924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.504 [2024-07-12 11:28:57.558057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.504 [2024-07-12 11:28:57.558084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.504 [2024-07-12 11:28:57.558100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.504 [2024-07-12 11:28:57.558113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.504 [2024-07-12 11:28:57.558150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.504 qpair failed and we were unable to recover it.
00:24:31.504 [2024-07-12 11:28:57.567886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.504 [2024-07-12 11:28:57.567982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.504 [2024-07-12 11:28:57.568015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.504 [2024-07-12 11:28:57.568031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.504 [2024-07-12 11:28:57.568045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.504 [2024-07-12 11:28:57.568075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.504 qpair failed and we were unable to recover it. 
00:24:31.504 [2024-07-12 11:28:57.577915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.504 [2024-07-12 11:28:57.578005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.504 [2024-07-12 11:28:57.578030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.504 [2024-07-12 11:28:57.578045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.504 [2024-07-12 11:28:57.578058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.504 [2024-07-12 11:28:57.578088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.504 qpair failed and we were unable to recover it. 
00:24:31.504 [2024-07-12 11:28:57.588030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.504 [2024-07-12 11:28:57.588124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.504 [2024-07-12 11:28:57.588148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.504 [2024-07-12 11:28:57.588163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.504 [2024-07-12 11:28:57.588177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.504 [2024-07-12 11:28:57.588207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.504 qpair failed and we were unable to recover it. 
00:24:31.504 [2024-07-12 11:28:57.597946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.504 [2024-07-12 11:28:57.598039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.504 [2024-07-12 11:28:57.598063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.504 [2024-07-12 11:28:57.598078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.504 [2024-07-12 11:28:57.598093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.504 [2024-07-12 11:28:57.598123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.504 qpair failed and we were unable to recover it. 
00:24:31.504 [2024-07-12 11:28:57.607987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.504 [2024-07-12 11:28:57.608069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.504 [2024-07-12 11:28:57.608099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.504 [2024-07-12 11:28:57.608115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.505 [2024-07-12 11:28:57.608129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.505 [2024-07-12 11:28:57.608159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.505 qpair failed and we were unable to recover it. 
00:24:31.505 [2024-07-12 11:28:57.618014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.505 [2024-07-12 11:28:57.618100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.505 [2024-07-12 11:28:57.618125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.505 [2024-07-12 11:28:57.618140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.505 [2024-07-12 11:28:57.618153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.505 [2024-07-12 11:28:57.618182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.505 qpair failed and we were unable to recover it. 
00:24:31.505 [2024-07-12 11:28:57.628038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.505 [2024-07-12 11:28:57.628130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.505 [2024-07-12 11:28:57.628154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.505 [2024-07-12 11:28:57.628169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.505 [2024-07-12 11:28:57.628182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.505 [2024-07-12 11:28:57.628212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.505 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.638099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.638188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.638213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.638228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.638240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.638283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.648196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.648304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.648343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.648359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.648377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.648407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.658160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.658282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.658308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.658323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.658337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.658368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.668171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.668297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.668323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.668338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.668352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.668382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.678202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.678290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.678314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.678329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.678343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.678374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.688211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.688309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.688336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.688351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.688364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.688394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.698292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.698399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.764 [2024-07-12 11:28:57.698425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.764 [2024-07-12 11:28:57.698441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.764 [2024-07-12 11:28:57.698454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.764 [2024-07-12 11:28:57.698485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.764 qpair failed and we were unable to recover it. 
00:24:31.764 [2024-07-12 11:28:57.708297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.764 [2024-07-12 11:28:57.708409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.708435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.708451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.708464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.708494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.718344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.718453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.718479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.718495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.718508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.718538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.728367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.728450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.728476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.728492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.728505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.728550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.738347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.738437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.738463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.738479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.738499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.738530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.748482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.748578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.748603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.748619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.748634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.748664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.758469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.758583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.758609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.758625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.758640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.758670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.768442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.768525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.768551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.768566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.768580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.768611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.778491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.778612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.778638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.778654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.778667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.778697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.788518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.788636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.788662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.788678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.788694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.788724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.798529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.798616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.798642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.798657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.798671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.798701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.808558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.808654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.808680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.808696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.808711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.808741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.818697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.818828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.818854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.818878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.818893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.818924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.828659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.765 [2024-07-12 11:28:57.828768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.765 [2024-07-12 11:28:57.828794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.765 [2024-07-12 11:28:57.828816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.765 [2024-07-12 11:28:57.828832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.765 [2024-07-12 11:28:57.828862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.765 qpair failed and we were unable to recover it. 
00:24:31.765 [2024-07-12 11:28:57.838650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.766 [2024-07-12 11:28:57.838741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.766 [2024-07-12 11:28:57.838768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.766 [2024-07-12 11:28:57.838783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.766 [2024-07-12 11:28:57.838797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:31.766 [2024-07-12 11:28:57.838827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.766 qpair failed and we were unable to recover it. 
00:24:31.766 [2024-07-12 11:28:57.848660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.766 [2024-07-12 11:28:57.848742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.766 [2024-07-12 11:28:57.848768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.766 [2024-07-12 11:28:57.848784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.766 [2024-07-12 11:28:57.848798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.766 [2024-07-12 11:28:57.848829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.766 qpair failed and we were unable to recover it.
00:24:31.766 [2024-07-12 11:28:57.858689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.766 [2024-07-12 11:28:57.858772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.766 [2024-07-12 11:28:57.858799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.766 [2024-07-12 11:28:57.858814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.766 [2024-07-12 11:28:57.858829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.766 [2024-07-12 11:28:57.858859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.766 qpair failed and we were unable to recover it.
00:24:31.766 [2024-07-12 11:28:57.868730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.766 [2024-07-12 11:28:57.868843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.766 [2024-07-12 11:28:57.868876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.766 [2024-07-12 11:28:57.868894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.766 [2024-07-12 11:28:57.868909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.766 [2024-07-12 11:28:57.868939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.766 qpair failed and we were unable to recover it.
00:24:31.766 [2024-07-12 11:28:57.878751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.766 [2024-07-12 11:28:57.878839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.766 [2024-07-12 11:28:57.878874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.766 [2024-07-12 11:28:57.878892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.766 [2024-07-12 11:28:57.878906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.766 [2024-07-12 11:28:57.878937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.766 qpair failed and we were unable to recover it.
00:24:31.766 [2024-07-12 11:28:57.888887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:31.766 [2024-07-12 11:28:57.889014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:31.766 [2024-07-12 11:28:57.889040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:31.766 [2024-07-12 11:28:57.889055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:31.766 [2024-07-12 11:28:57.889070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:31.766 [2024-07-12 11:28:57.889100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.766 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.898859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.898954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.898979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.898995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.899010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.899040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.908926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.909022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.909049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.909064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.909077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.909108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.918876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.918965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.918996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.919013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.919028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.919058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.928899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.929017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.929043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.929059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.929072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.929103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.938924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.939007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.939033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.939048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.939061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.939092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.948968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.949062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.949088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.949103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.949117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.949147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.025 [2024-07-12 11:28:57.958978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.025 [2024-07-12 11:28:57.959063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.025 [2024-07-12 11:28:57.959089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.025 [2024-07-12 11:28:57.959104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.025 [2024-07-12 11:28:57.959117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.025 [2024-07-12 11:28:57.959153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.025 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:57.969004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:57.969092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:57.969118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:57.969134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:57.969148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:57.969178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:57.979071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:57.979152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:57.979178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:57.979194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:57.979208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:57.979238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:57.989120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:57.989246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:57.989272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:57.989287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:57.989301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:57.989346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:57.999133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:57.999222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:57.999248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:57.999264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:57.999279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:57.999309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.009190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.009316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.009350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:58.009367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:58.009381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:58.009411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.019190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.019274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.019300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:58.019316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:58.019330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:58.019361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.029190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.029283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.029309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:58.029324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:58.029337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:58.029369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.039233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.039319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.039344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:58.039359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:58.039374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:58.039404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.049333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.049456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.049482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:58.049498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:58.049518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:58.049549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.059335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.059417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.059442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.026 [2024-07-12 11:28:58.059458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.026 [2024-07-12 11:28:58.059472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.026 [2024-07-12 11:28:58.059503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.026 qpair failed and we were unable to recover it.
00:24:32.026 [2024-07-12 11:28:58.069299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.026 [2024-07-12 11:28:58.069392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.026 [2024-07-12 11:28:58.069418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.069434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.069447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.069478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.079329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.079419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.079446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.079461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.079476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.079506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.089388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.089472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.089499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.089514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.089529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.089571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.099356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.099491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.099518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.099533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.099546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.099578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.109404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.109494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.109519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.109534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.109548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.109579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.119423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.119512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.119537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.119553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.119567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.119597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.129470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.129556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.129585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.129601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.129616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.129646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.139499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.139578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.139604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.139620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.139639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.139670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.027 [2024-07-12 11:28:58.149565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.027 [2024-07-12 11:28:58.149670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.027 [2024-07-12 11:28:58.149696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.027 [2024-07-12 11:28:58.149712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.027 [2024-07-12 11:28:58.149725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.027 [2024-07-12 11:28:58.149768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.027 qpair failed and we were unable to recover it.
00:24:32.286 [2024-07-12 11:28:58.159553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.286 [2024-07-12 11:28:58.159651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.286 [2024-07-12 11:28:58.159681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.286 [2024-07-12 11:28:58.159698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.286 [2024-07-12 11:28:58.159712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.286 [2024-07-12 11:28:58.159743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.286 qpair failed and we were unable to recover it.
00:24:32.286 [2024-07-12 11:28:58.169583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.286 [2024-07-12 11:28:58.169686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.286 [2024-07-12 11:28:58.169713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.286 [2024-07-12 11:28:58.169729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.286 [2024-07-12 11:28:58.169744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.287 [2024-07-12 11:28:58.169775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.287 qpair failed and we were unable to recover it.
00:24:32.287 [2024-07-12 11:28:58.179749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.287 [2024-07-12 11:28:58.179848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.287 [2024-07-12 11:28:58.179883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.287 [2024-07-12 11:28:58.179900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.287 [2024-07-12 11:28:58.179914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.287 [2024-07-12 11:28:58.179944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.287 qpair failed and we were unable to recover it.
00:24:32.287 [2024-07-12 11:28:58.189694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.287 [2024-07-12 11:28:58.189786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.287 [2024-07-12 11:28:58.189815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.287 [2024-07-12 11:28:58.189831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.287 [2024-07-12 11:28:58.189846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.287 [2024-07-12 11:28:58.189884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.287 qpair failed and we were unable to recover it.
00:24:32.287 [2024-07-12 11:28:58.199732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.287 [2024-07-12 11:28:58.199821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.287 [2024-07-12 11:28:58.199848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.287 [2024-07-12 11:28:58.199863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.287 [2024-07-12 11:28:58.199887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.287 [2024-07-12 11:28:58.199918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.287 qpair failed and we were unable to recover it.
00:24:32.287 [2024-07-12 11:28:58.209740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.209824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.209850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.209872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.209888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.209920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.219752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.219851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.219884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.219900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.219915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.219946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.229756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.229841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.229873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.229897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.229913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.229957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.239808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.239916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.239943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.239959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.239972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.240003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.249787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.249876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.249903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.249918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.249932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.249962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.259824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.259922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.259949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.259965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.259979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.260010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.269875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.270016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.270042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.270058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.270071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.270102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.279887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.280003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.280030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.280045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.280058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.280090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.289908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.289990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.290016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.290031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.287 [2024-07-12 11:28:58.290046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.287 [2024-07-12 11:28:58.290077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.287 qpair failed and we were unable to recover it. 
00:24:32.287 [2024-07-12 11:28:58.299980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.287 [2024-07-12 11:28:58.300091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.287 [2024-07-12 11:28:58.300118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.287 [2024-07-12 11:28:58.300133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.300147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.300177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.310010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.310109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.310139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.310156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.310171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.310202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.319997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.320090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.320122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.320139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.320152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.320182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.330030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.330112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.330138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.330154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.330170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.330212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.340104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.340234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.340260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.340276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.340288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.340334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.350095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.350228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.350254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.350269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.350283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.350313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.360114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.360202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.360227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.360243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.360256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.360292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.370235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.370313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.370338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.370353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.370365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.370394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.380160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.380251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.380281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.380298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.380312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.380343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.390236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.390363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.390390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.390406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.390419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.390465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.400255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.400352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.400379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.400394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.400408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.400438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.288 [2024-07-12 11:28:58.410250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.288 [2024-07-12 11:28:58.410387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.288 [2024-07-12 11:28:58.410419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.288 [2024-07-12 11:28:58.410435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.288 [2024-07-12 11:28:58.410450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.288 [2024-07-12 11:28:58.410481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.288 qpair failed and we were unable to recover it. 
00:24:32.547 [2024-07-12 11:28:58.420282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.547 [2024-07-12 11:28:58.420411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.547 [2024-07-12 11:28:58.420438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.547 [2024-07-12 11:28:58.420453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.547 [2024-07-12 11:28:58.420468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.547 [2024-07-12 11:28:58.420498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.547 qpair failed and we were unable to recover it. 
00:24:32.547 [2024-07-12 11:28:58.430330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.547 [2024-07-12 11:28:58.430449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.547 [2024-07-12 11:28:58.430475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.547 [2024-07-12 11:28:58.430491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.548 [2024-07-12 11:28:58.430505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.548 [2024-07-12 11:28:58.430536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.548 qpair failed and we were unable to recover it. 
00:24:32.548 [2024-07-12 11:28:58.440351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.548 [2024-07-12 11:28:58.440440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.548 [2024-07-12 11:28:58.440466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.548 [2024-07-12 11:28:58.440482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.548 [2024-07-12 11:28:58.440496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.548 [2024-07-12 11:28:58.440527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.548 qpair failed and we were unable to recover it. 
00:24:32.548 [2024-07-12 11:28:58.450426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.548 [2024-07-12 11:28:58.450511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.548 [2024-07-12 11:28:58.450537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.548 [2024-07-12 11:28:58.450552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.548 [2024-07-12 11:28:58.450567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.548 [2024-07-12 11:28:58.450603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.548 qpair failed and we were unable to recover it. 
00:24:32.548 [2024-07-12 11:28:58.460439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.548 [2024-07-12 11:28:58.460523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.548 [2024-07-12 11:28:58.460549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.548 [2024-07-12 11:28:58.460565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.548 [2024-07-12 11:28:58.460579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.548 [2024-07-12 11:28:58.460610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.548 qpair failed and we were unable to recover it. 
00:24:32.548 [2024-07-12 11:28:58.470440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.548 [2024-07-12 11:28:58.470566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.548 [2024-07-12 11:28:58.470592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.548 [2024-07-12 11:28:58.470608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.548 [2024-07-12 11:28:58.470623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.548 [2024-07-12 11:28:58.470652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.548 qpair failed and we were unable to recover it. 
00:24:32.548 [2024-07-12 11:28:58.480456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.548 [2024-07-12 11:28:58.480586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.548 [2024-07-12 11:28:58.480612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.548 [2024-07-12 11:28:58.480628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.548 [2024-07-12 11:28:58.480642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.548 [2024-07-12 11:28:58.480673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.548 qpair failed and we were unable to recover it. 
00:24:32.548 [2024-07-12 11:28:58.490602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.490689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.490716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.490731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.490746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.490776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.500506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.500597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.500625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.500640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.500654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.500684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.510574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.510662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.510688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.510704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.510719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.510749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.520606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.520699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.520725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.520740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.520753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.520784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.530599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.530689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.530719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.530735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.530749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.530779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.540720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.540830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.540856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.540880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.540928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.540960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.550745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.550837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.550863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.550895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.550909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.550940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.560686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.560793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.560819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.560835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.560848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.548 [2024-07-12 11:28:58.560886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.548 qpair failed and we were unable to recover it.
00:24:32.548 [2024-07-12 11:28:58.570733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.548 [2024-07-12 11:28:58.570817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.548 [2024-07-12 11:28:58.570843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.548 [2024-07-12 11:28:58.570859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.548 [2024-07-12 11:28:58.570883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.570915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.580820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.580924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.580951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.580966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.580980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.581011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.590781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.590879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.590905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.590921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.590934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.590964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.600784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.600902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.600929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.600945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.600958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.600988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.610815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.610898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.610925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.610940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.610953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.610984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.620846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.620965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.620990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.621006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.621019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.621050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.630931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.631023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.631049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.631069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.631085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.631116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.640939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.641042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.641068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.641084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.641097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.641139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.650948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.651033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.651059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.651074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.651088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.651117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.660993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.661075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.661101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.661117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.661129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.661159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.549 [2024-07-12 11:28:58.671021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.549 [2024-07-12 11:28:58.671116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.549 [2024-07-12 11:28:58.671141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.549 [2024-07-12 11:28:58.671157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.549 [2024-07-12 11:28:58.671170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.549 [2024-07-12 11:28:58.671199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.549 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.681070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.681166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.681194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.681211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.681224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.681256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.691161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.691246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.691271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.691286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.691299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.691329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.701113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.701205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.701229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.701244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.701257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.701287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.711234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.711336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.711361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.711376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.711390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.711421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.721202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.721290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.721318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.721341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.721355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.721387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.731188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.731281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.731307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.731322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.731335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.731366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.741246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.741340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.741365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.741381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.741394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.741424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.751304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.751413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.751438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.751453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.751466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.751496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.761361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.761477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.761505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.761521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.761534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.761565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.771346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.771434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.771459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.771474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.771487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.771516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.781336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.781423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.781448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.781463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.781476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.781505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.791386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.791511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.791536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.791552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.791566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.791610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.801423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.808 [2024-07-12 11:28:58.801511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.808 [2024-07-12 11:28:58.801536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.808 [2024-07-12 11:28:58.801551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.808 [2024-07-12 11:28:58.801564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.808 [2024-07-12 11:28:58.801594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.808 qpair failed and we were unable to recover it.
00:24:32.808 [2024-07-12 11:28:58.811459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.809 [2024-07-12 11:28:58.811547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.809 [2024-07-12 11:28:58.811577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.809 [2024-07-12 11:28:58.811592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.809 [2024-07-12 11:28:58.811605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.809 [2024-07-12 11:28:58.811635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.809 qpair failed and we were unable to recover it.
00:24:32.809 [2024-07-12 11:28:58.821475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.809 [2024-07-12 11:28:58.821566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.809 [2024-07-12 11:28:58.821594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.809 [2024-07-12 11:28:58.821610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.809 [2024-07-12 11:28:58.821625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.809 [2024-07-12 11:28:58.821656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.809 qpair failed and we were unable to recover it.
00:24:32.809 [2024-07-12 11:28:58.831616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.809 [2024-07-12 11:28:58.831740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.809 [2024-07-12 11:28:58.831764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.809 [2024-07-12 11:28:58.831780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.809 [2024-07-12 11:28:58.831793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.809 [2024-07-12 11:28:58.831825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.809 qpair failed and we were unable to recover it.
00:24:32.809 [2024-07-12 11:28:58.841526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:32.809 [2024-07-12 11:28:58.841619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:32.809 [2024-07-12 11:28:58.841645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:32.809 [2024-07-12 11:28:58.841660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:32.809 [2024-07-12 11:28:58.841673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:32.809 [2024-07-12 11:28:58.841703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:32.809 qpair failed and we were unable to recover it.
00:24:32.809 [2024-07-12 11:28:58.851587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.851678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.851704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.851722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.851736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.851772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.861598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.861726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.861753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.861768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.861782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.861813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.871667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.871772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.871797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.871812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.871825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.871856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.881647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.881735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.881760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.881775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.881788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.881818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.891662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.891747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.891771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.891787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.891799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.891829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.901744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.901833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.901863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.901890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.901905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.901935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.911773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.911891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.911916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.911931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.911945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.911976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.921767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.921894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.921919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.921934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.921949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.921979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:32.809 [2024-07-12 11:28:58.931761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.809 [2024-07-12 11:28:58.931852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.809 [2024-07-12 11:28:58.931883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.809 [2024-07-12 11:28:58.931909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.809 [2024-07-12 11:28:58.931923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:32.809 [2024-07-12 11:28:58.931954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:32.809 qpair failed and we were unable to recover it. 
00:24:33.067 [2024-07-12 11:28:58.941796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.067 [2024-07-12 11:28:58.941901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.067 [2024-07-12 11:28:58.941926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.067 [2024-07-12 11:28:58.941941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.067 [2024-07-12 11:28:58.941960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.067 [2024-07-12 11:28:58.941991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.067 qpair failed and we were unable to recover it. 
00:24:33.067 [2024-07-12 11:28:58.951848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.067 [2024-07-12 11:28:58.951951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.067 [2024-07-12 11:28:58.951980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.067 [2024-07-12 11:28:58.951997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.067 [2024-07-12 11:28:58.952011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.067 [2024-07-12 11:28:58.952042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.067 qpair failed and we were unable to recover it. 
00:24:33.067 [2024-07-12 11:28:58.961892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.067 [2024-07-12 11:28:58.962002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.067 [2024-07-12 11:28:58.962028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.067 [2024-07-12 11:28:58.962043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.067 [2024-07-12 11:28:58.962057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.067 [2024-07-12 11:28:58.962102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.067 qpair failed and we were unable to recover it. 
00:24:33.067 [2024-07-12 11:28:58.971919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.067 [2024-07-12 11:28:58.972050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.067 [2024-07-12 11:28:58.972077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.067 [2024-07-12 11:28:58.972093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.067 [2024-07-12 11:28:58.972107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.067 [2024-07-12 11:28:58.972138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:58.981926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:58.982057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:58.982084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:58.982100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:58.982114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:58.982158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:58.991964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:58.992061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:58.992086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:58.992101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:58.992114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:58.992144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.001984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.002117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.002147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.002164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.002178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.002209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.012042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.012162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.012187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.012202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.012216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.012247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.022065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.022179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.022209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.022225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.022239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.022281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.032069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.032157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.032182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.032202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.032216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.032247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.042096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.042179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.042204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.042219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.042232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.042262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.052106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.052240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.052268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.052284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.052299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.052328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.062121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.062205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.062230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.062245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.062257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.062286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.072160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.072251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.072275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.072290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.072303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.072333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.082235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.082346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.082374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.082391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.082404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.082436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.092228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.092359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.092390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.092406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.092420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.092465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.102248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.102333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.102359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.102374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.102387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.102417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.112289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.112398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.112422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.112437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.112450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.112493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.122287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.068 [2024-07-12 11:28:59.122386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.068 [2024-07-12 11:28:59.122411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.068 [2024-07-12 11:28:59.122432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.068 [2024-07-12 11:28:59.122447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.068 [2024-07-12 11:28:59.122477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.068 qpair failed and we were unable to recover it. 
00:24:33.068 [2024-07-12 11:28:59.132373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.068 [2024-07-12 11:28:59.132485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.068 [2024-07-12 11:28:59.132511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.068 [2024-07-12 11:28:59.132526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.068 [2024-07-12 11:28:59.132540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.068 [2024-07-12 11:28:59.132570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.068 qpair failed and we were unable to recover it.
00:24:33.068 [2024-07-12 11:28:59.142371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.068 [2024-07-12 11:28:59.142456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.068 [2024-07-12 11:28:59.142482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.068 [2024-07-12 11:28:59.142497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.068 [2024-07-12 11:28:59.142510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.068 [2024-07-12 11:28:59.142540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.068 qpair failed and we were unable to recover it.
00:24:33.068 [2024-07-12 11:28:59.152382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.068 [2024-07-12 11:28:59.152474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.068 [2024-07-12 11:28:59.152499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.068 [2024-07-12 11:28:59.152514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.068 [2024-07-12 11:28:59.152527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.068 [2024-07-12 11:28:59.152557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.068 qpair failed and we were unable to recover it.
00:24:33.068 [2024-07-12 11:28:59.162419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.068 [2024-07-12 11:28:59.162552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.068 [2024-07-12 11:28:59.162579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.068 [2024-07-12 11:28:59.162595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.068 [2024-07-12 11:28:59.162609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.068 [2024-07-12 11:28:59.162639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.068 qpair failed and we were unable to recover it.
00:24:33.068 [2024-07-12 11:28:59.172473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.068 [2024-07-12 11:28:59.172571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.068 [2024-07-12 11:28:59.172599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.069 [2024-07-12 11:28:59.172615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.069 [2024-07-12 11:28:59.172630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.069 [2024-07-12 11:28:59.172661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.069 qpair failed and we were unable to recover it.
00:24:33.069 [2024-07-12 11:28:59.182469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.069 [2024-07-12 11:28:59.182557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.069 [2024-07-12 11:28:59.182582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.069 [2024-07-12 11:28:59.182597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.069 [2024-07-12 11:28:59.182610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.069 [2024-07-12 11:28:59.182640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.069 qpair failed and we were unable to recover it.
00:24:33.069 [2024-07-12 11:28:59.192526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.069 [2024-07-12 11:28:59.192615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.069 [2024-07-12 11:28:59.192641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.069 [2024-07-12 11:28:59.192655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.069 [2024-07-12 11:28:59.192668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.069 [2024-07-12 11:28:59.192699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.069 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.202540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.202625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.202650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.202666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.202679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.202715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.212546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.212635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.212664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.212680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.212693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.212723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.222600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.222684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.222709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.222724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.222737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.222767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.232637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.232760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.232786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.232802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.232816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.232858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.242657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.242743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.242768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.242783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.242795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.242825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.252671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.252804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.252831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.252847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.252861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.252904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.262744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.262871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.262896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.262911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.262926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.262958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.272730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.272821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.272845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.272859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.272880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.272912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.282756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.282845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.282879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.282897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.282911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.282940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.292813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.292942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.292969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.292985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.292999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.293029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.302807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.302903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.302934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.302949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.302963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.302993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.312850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.312947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.312973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.312988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.313000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.313030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.322876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.322965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.322990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.323005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.323018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.323048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.332897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.332981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.333009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.333025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.333039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.333070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.343048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.343133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.343157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.343172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.343190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.343221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.352963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.353052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.353077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.353091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.353104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.353134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.362996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.363088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.363113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.363128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.363141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.363171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.373019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.373115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.373140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.373155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.373169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.373198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.383034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.383127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.383152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.383167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.383180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.328 [2024-07-12 11:28:59.383210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.328 qpair failed and we were unable to recover it.
00:24:33.328 [2024-07-12 11:28:59.393104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.328 [2024-07-12 11:28:59.393208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.328 [2024-07-12 11:28:59.393233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.328 [2024-07-12 11:28:59.393248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.328 [2024-07-12 11:28:59.393261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.393291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.329 [2024-07-12 11:28:59.403088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.329 [2024-07-12 11:28:59.403177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.329 [2024-07-12 11:28:59.403201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.329 [2024-07-12 11:28:59.403215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.329 [2024-07-12 11:28:59.403229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.403258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.329 [2024-07-12 11:28:59.413139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.329 [2024-07-12 11:28:59.413232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.329 [2024-07-12 11:28:59.413257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.329 [2024-07-12 11:28:59.413272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.329 [2024-07-12 11:28:59.413286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.413316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.329 [2024-07-12 11:28:59.423147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.329 [2024-07-12 11:28:59.423273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.329 [2024-07-12 11:28:59.423300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.329 [2024-07-12 11:28:59.423316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.329 [2024-07-12 11:28:59.423329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.423372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.329 [2024-07-12 11:28:59.433176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.329 [2024-07-12 11:28:59.433266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.329 [2024-07-12 11:28:59.433291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.329 [2024-07-12 11:28:59.433305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.329 [2024-07-12 11:28:59.433323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.433354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.329 [2024-07-12 11:28:59.443212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.329 [2024-07-12 11:28:59.443355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.329 [2024-07-12 11:28:59.443383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.329 [2024-07-12 11:28:59.443399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.329 [2024-07-12 11:28:59.443413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.443470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.329 [2024-07-12 11:28:59.453235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.329 [2024-07-12 11:28:59.453322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.329 [2024-07-12 11:28:59.453348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.329 [2024-07-12 11:28:59.453363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.329 [2024-07-12 11:28:59.453376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.329 [2024-07-12 11:28:59.453405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.329 qpair failed and we were unable to recover it.
00:24:33.587 [2024-07-12 11:28:59.463371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.587 [2024-07-12 11:28:59.463461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.587 [2024-07-12 11:28:59.463485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.587 [2024-07-12 11:28:59.463500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.587 [2024-07-12 11:28:59.463513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.587 [2024-07-12 11:28:59.463543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.587 qpair failed and we were unable to recover it.
00:24:33.587 [2024-07-12 11:28:59.473331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.587 [2024-07-12 11:28:59.473437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.587 [2024-07-12 11:28:59.473462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.587 [2024-07-12 11:28:59.473476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.587 [2024-07-12 11:28:59.473489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.587 [2024-07-12 11:28:59.473520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.587 qpair failed and we were unable to recover it.
00:24:33.587 [2024-07-12 11:28:59.483395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.587 [2024-07-12 11:28:59.483481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.587 [2024-07-12 11:28:59.483505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.587 [2024-07-12 11:28:59.483520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.587 [2024-07-12 11:28:59.483534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.587 [2024-07-12 11:28:59.483563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.587 qpair failed and we were unable to recover it.
00:24:33.587 [2024-07-12 11:28:59.493324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.493404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.493429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.493444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.493457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.493486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.503366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.503448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.503474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.503489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.503501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.503531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.513411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.513503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.513527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.513542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.513554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.513584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.523458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.523558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.523582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.523605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.523620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.523650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.533545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.533640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.533664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.533679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.533693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.533722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.543498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.543621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.543648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.543663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.543677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.543708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.553550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.553658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.553685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.553701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.553715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.553745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.563546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.563653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.563680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.563706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.563720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.563750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.573556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.573641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.573667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.573683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.573697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.573727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.583601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.583700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.583726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.583742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.583756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.583785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.593637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.593726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.593759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.593775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.593788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.593817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.603666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.603757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.603783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.603799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.603812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.603842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.613698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.613798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.613829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.613845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.613859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.613899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.623708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.623805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.623831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.623846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.623859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.623907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.633736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.587 [2024-07-12 11:28:59.633831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.587 [2024-07-12 11:28:59.633858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.587 [2024-07-12 11:28:59.633881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.587 [2024-07-12 11:28:59.633895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.587 [2024-07-12 11:28:59.633934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.587 qpair failed and we were unable to recover it. 
00:24:33.587 [2024-07-12 11:28:59.643770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.643873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.643900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.643928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.643941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.643971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.653791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.653891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.653917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.653932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.653945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.653981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.663832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.663923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.663949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.663965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.663978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.664007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.673880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.673980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.674006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.674023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.674038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.674069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.683891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.683979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.684005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.684021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.684037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.684067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.693965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.694050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.694075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.694091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.694104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.694135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.703974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.704069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.704100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.704116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.704129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.704160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.588 [2024-07-12 11:28:59.714006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.588 [2024-07-12 11:28:59.714095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.588 [2024-07-12 11:28:59.714121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.588 [2024-07-12 11:28:59.714136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.588 [2024-07-12 11:28:59.714150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.588 [2024-07-12 11:28:59.714180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.588 qpair failed and we were unable to recover it. 
00:24:33.846 [2024-07-12 11:28:59.724006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.846 [2024-07-12 11:28:59.724097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.846 [2024-07-12 11:28:59.724123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.846 [2024-07-12 11:28:59.724138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.846 [2024-07-12 11:28:59.724151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.846 [2024-07-12 11:28:59.724182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.846 qpair failed and we were unable to recover it. 
00:24:33.846 [2024-07-12 11:28:59.734106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.846 [2024-07-12 11:28:59.734220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.846 [2024-07-12 11:28:59.734246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.846 [2024-07-12 11:28:59.734262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.846 [2024-07-12 11:28:59.734275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.846 [2024-07-12 11:28:59.734319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.846 qpair failed and we were unable to recover it. 
00:24:33.846 [2024-07-12 11:28:59.744079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.846 [2024-07-12 11:28:59.744172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.846 [2024-07-12 11:28:59.744198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.846 [2024-07-12 11:28:59.744214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.846 [2024-07-12 11:28:59.744233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.846 [2024-07-12 11:28:59.744264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.846 qpair failed and we were unable to recover it. 
00:24:33.846 [2024-07-12 11:28:59.754108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.846 [2024-07-12 11:28:59.754200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.846 [2024-07-12 11:28:59.754226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.846 [2024-07-12 11:28:59.754242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.846 [2024-07-12 11:28:59.754255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.846 [2024-07-12 11:28:59.754285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.846 qpair failed and we were unable to recover it. 
00:24:33.846 [2024-07-12 11:28:59.764125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.846 [2024-07-12 11:28:59.764213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.846 [2024-07-12 11:28:59.764239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.846 [2024-07-12 11:28:59.764255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.846 [2024-07-12 11:28:59.764269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:33.846 [2024-07-12 11:28:59.764300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:33.846 qpair failed and we were unable to recover it. 
00:24:33.846 [2024-07-12 11:28:59.774145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.846 [2024-07-12 11:28:59.774231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.846 [2024-07-12 11:28:59.774257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.846 [2024-07-12 11:28:59.774272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.846 [2024-07-12 11:28:59.774286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.846 [2024-07-12 11:28:59.774316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.846 qpair failed and we were unable to recover it.
00:24:33.846 [2024-07-12 11:28:59.784174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.846 [2024-07-12 11:28:59.784256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.846 [2024-07-12 11:28:59.784282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.846 [2024-07-12 11:28:59.784298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.846 [2024-07-12 11:28:59.784311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.846 [2024-07-12 11:28:59.784342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.846 qpair failed and we were unable to recover it.
00:24:33.846 [2024-07-12 11:28:59.794207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.846 [2024-07-12 11:28:59.794306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.846 [2024-07-12 11:28:59.794332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.846 [2024-07-12 11:28:59.794347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.846 [2024-07-12 11:28:59.794360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.846 [2024-07-12 11:28:59.794391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.846 qpair failed and we were unable to recover it.
00:24:33.846 [2024-07-12 11:28:59.804226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.846 [2024-07-12 11:28:59.804312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.846 [2024-07-12 11:28:59.804339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.846 [2024-07-12 11:28:59.804354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.846 [2024-07-12 11:28:59.804367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.846 [2024-07-12 11:28:59.804398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.846 qpair failed and we were unable to recover it.
00:24:33.846 [2024-07-12 11:28:59.814249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.846 [2024-07-12 11:28:59.814349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.846 [2024-07-12 11:28:59.814376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.846 [2024-07-12 11:28:59.814391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.846 [2024-07-12 11:28:59.814405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.846 [2024-07-12 11:28:59.814435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.846 qpair failed and we were unable to recover it.
00:24:33.846 [2024-07-12 11:28:59.824273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.846 [2024-07-12 11:28:59.824378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.846 [2024-07-12 11:28:59.824404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.846 [2024-07-12 11:28:59.824419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.846 [2024-07-12 11:28:59.824432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.846 [2024-07-12 11:28:59.824462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.846 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.834356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.834445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.834471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.834488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.834507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.834538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.844371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.844456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.844482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.844498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.844511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.844542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.854437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.854543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.854570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.854585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.854598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.854642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.864392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.864475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.864501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.864516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.864531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.864561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.874518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.874663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.874689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.874705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.874719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.874764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.884455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.884543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.884569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.884584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.884597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.884629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.894494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.894610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.894637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.894652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.894665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.894696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.904524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.904607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.904634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.904650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.904665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.904708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.914541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.914633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.914659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.914675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.914688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.914719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.924573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.924660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.924687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.924710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.924724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.924768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.934611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.934705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.934732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.934748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.934761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.934792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.944665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.944755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.944782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.944798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.944813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.944843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.954656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.954749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.954775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.954791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.954806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.954837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.964697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.964828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.964853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.964875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.964891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.964922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:33.847 [2024-07-12 11:28:59.974717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:33.847 [2024-07-12 11:28:59.974805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:33.847 [2024-07-12 11:28:59.974832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:33.847 [2024-07-12 11:28:59.974847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:33.847 [2024-07-12 11:28:59.974861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:33.847 [2024-07-12 11:28:59.974917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:33.847 qpair failed and we were unable to recover it.
00:24:34.106 [2024-07-12 11:28:59.984733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.106 [2024-07-12 11:28:59.984815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.106 [2024-07-12 11:28:59.984841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.106 [2024-07-12 11:28:59.984856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.106 [2024-07-12 11:28:59.984876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.106 [2024-07-12 11:28:59.984909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.106 qpair failed and we were unable to recover it.
00:24:34.106 [2024-07-12 11:28:59.994800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.106 [2024-07-12 11:28:59.994895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.106 [2024-07-12 11:28:59.994922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.106 [2024-07-12 11:28:59.994938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.106 [2024-07-12 11:28:59.994951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.106 [2024-07-12 11:28:59.994982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.106 qpair failed and we were unable to recover it.
00:24:34.106 [2024-07-12 11:29:00.004806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.106 [2024-07-12 11:29:00.004935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.106 [2024-07-12 11:29:00.004962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.106 [2024-07-12 11:29:00.004978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.106 [2024-07-12 11:29:00.004992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.106 [2024-07-12 11:29:00.005023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.106 qpair failed and we were unable to recover it.
00:24:34.106 [2024-07-12 11:29:00.014838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.106 [2024-07-12 11:29:00.014931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.106 [2024-07-12 11:29:00.014963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.014980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.014996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.015027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.024896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.024981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.025006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.025022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.025035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.025066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.034925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.035040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.035068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.035084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.035097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.035128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.044961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.045046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.045072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.045088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.045101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.045131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.054983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.055070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.055097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.055113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.055126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.055174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.064984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.065070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.065097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.065113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.065126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.065157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.075019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.075113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.075139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.075154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.075167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.075198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.085034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.085123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.085149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.085164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.085177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.085208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.095089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.095215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.095241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.095256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.095270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.095300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.105104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.105191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.105222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.105238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.105253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.105284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.115139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.115264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.115290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.115306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.115319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.115349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.125245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.107 [2024-07-12 11:29:00.125358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.107 [2024-07-12 11:29:00.125384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.107 [2024-07-12 11:29:00.125400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.107 [2024-07-12 11:29:00.125413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.107 [2024-07-12 11:29:00.125444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.107 qpair failed and we were unable to recover it.
00:24:34.107 [2024-07-12 11:29:00.135183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.107 [2024-07-12 11:29:00.135265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.107 [2024-07-12 11:29:00.135291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.107 [2024-07-12 11:29:00.135306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.107 [2024-07-12 11:29:00.135319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.107 [2024-07-12 11:29:00.135363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.107 qpair failed and we were unable to recover it. 
00:24:34.107 [2024-07-12 11:29:00.145206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.107 [2024-07-12 11:29:00.145296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.107 [2024-07-12 11:29:00.145322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.107 [2024-07-12 11:29:00.145337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.145351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.145386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.155251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.155375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.155401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.155416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.155430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.155460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.165275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.165362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.165388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.165404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.165417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.165447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.175290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.175374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.175400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.175416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.175429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.175460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.185384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.185471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.185497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.185512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.185526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.185556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.195424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.195532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.195558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.195574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.195587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.195618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.205368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.205451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.205477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.205492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.205505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.205537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.215415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.215498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.215525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.215541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.215554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.215586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.225423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.225508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.225534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.225550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.225563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.225594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.108 [2024-07-12 11:29:00.235455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.108 [2024-07-12 11:29:00.235540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.108 [2024-07-12 11:29:00.235565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.108 [2024-07-12 11:29:00.235581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.108 [2024-07-12 11:29:00.235601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.108 [2024-07-12 11:29:00.235631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.108 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.245482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.245574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.245600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.245615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.245629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.245660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.255520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.255614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.255641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.255657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.255670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.255701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.265544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.265628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.265653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.265669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.265682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.265712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.275611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.275721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.275747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.275763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.275776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.275807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.285605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.285693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.285720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.285735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.285749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.285779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.295637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.295728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.295754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.295770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.295783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.295813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.305665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.305752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.305779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.305794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.305808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.305838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.315680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.315804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.315830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.315845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.315858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.315900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.325735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.325860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.325893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.325915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.325928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.325958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.335728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.335814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.335841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.335856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.335877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.335923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.345764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.345853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.345886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.345903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.345916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.345947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.355830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.355961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.355988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.356004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.356017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.356048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.365819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.365918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.365944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.365960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.365973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.366005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.375919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.366 [2024-07-12 11:29:00.376013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.366 [2024-07-12 11:29:00.376038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.366 [2024-07-12 11:29:00.376053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.366 [2024-07-12 11:29:00.376065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.366 [2024-07-12 11:29:00.376094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.366 qpair failed and we were unable to recover it. 
00:24:34.366 [2024-07-12 11:29:00.385875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.367 [2024-07-12 11:29:00.385959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.367 [2024-07-12 11:29:00.385985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.367 [2024-07-12 11:29:00.386000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.367 [2024-07-12 11:29:00.386013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.367 [2024-07-12 11:29:00.386044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.367 qpair failed and we were unable to recover it. 
00:24:34.367 [2024-07-12 11:29:00.395933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.367 [2024-07-12 11:29:00.396028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.367 [2024-07-12 11:29:00.396058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.367 [2024-07-12 11:29:00.396074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.367 [2024-07-12 11:29:00.396087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.367 [2024-07-12 11:29:00.396119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.367 qpair failed and we were unable to recover it. 
00:24:34.367 [2024-07-12 11:29:00.405956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.367 [2024-07-12 11:29:00.406049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.367 [2024-07-12 11:29:00.406075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.367 [2024-07-12 11:29:00.406091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.367 [2024-07-12 11:29:00.406105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90 00:24:34.367 [2024-07-12 11:29:00.406135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:34.367 qpair failed and we were unable to recover it. 
00:24:34.367 [2024-07-12 11:29:00.415974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.416090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.416117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.416138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.416154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.416198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.425984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.426095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.426122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.426137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.426152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.426182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.436053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.436146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.436175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.436191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.436205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.436236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.446170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.446301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.446327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.446343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.446356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.446386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.456123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.456246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.456271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.456287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.456300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.456346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.466111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.466194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.466220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.466236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.466249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.466280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.476191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.476283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.476310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.476326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.476339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.476370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.486158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.486245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.486271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.486287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.486302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.486332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.367 [2024-07-12 11:29:00.496193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.367 [2024-07-12 11:29:00.496279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.367 [2024-07-12 11:29:00.496305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.367 [2024-07-12 11:29:00.496320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.367 [2024-07-12 11:29:00.496334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.367 [2024-07-12 11:29:00.496365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.367 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.506201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.506299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.506331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.506347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.506362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.506393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.516238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.516329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.516355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.516372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.516386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.516417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.526271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.526389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.526415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.526431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.526446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.526476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.536313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.536439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.536465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.536480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.536494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.536524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.546316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.546405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.546431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.546447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.546460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.546497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.556392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.556489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.556518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.556535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.556551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.556583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.566388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.566478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.566505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.566521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.566534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.566566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.576420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.576535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.576561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.576576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.576589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.576620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.586460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.586547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.586573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.586589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.586603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0d8000b90
00:24:34.625 [2024-07-12 11:29:00.586633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.596510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.596600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.596636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.596653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.596666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0e0000b90
00:24:34.625 [2024-07-12 11:29:00.596697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.596740] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:24:34.625 A controller has encountered a failure and is being reset.
00:24:34.625 [2024-07-12 11:29:00.606518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.606606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.606637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.606653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.606666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0e8000b90
00:24:34.625 [2024-07-12 11:29:00.606697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.625 [2024-07-12 11:29:00.616589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:34.625 [2024-07-12 11:29:00.616692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:34.625 [2024-07-12 11:29:00.616721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:34.625 [2024-07-12 11:29:00.616736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:34.625 [2024-07-12 11:29:00.616750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0e8000b90
00:24:34.625 [2024-07-12 11:29:00.616780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:34.625 qpair failed and we were unable to recover it.
00:24:34.883 Controller properly reset.
00:24:34.883 Initializing NVMe Controllers
00:24:34.883 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:34.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:34.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:24:34.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:24:34.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:24:34.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:24:34.883 Initialization complete. Launching workers.
00:24:34.883 Starting thread on core 1
00:24:34.883 Starting thread on core 2
00:24:34.883 Starting thread on core 3
00:24:34.883 Starting thread on core 0
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:24:34.883 
00:24:34.883 real 0m10.983s
00:24:34.883 user 0m19.008s
00:24:34.883 sys 0m5.433s
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:34.883 ************************************
00:24:34.883 END TEST nvmf_target_disconnect_tc2
00:24:34.883 ************************************
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:34.883 rmmod nvme_tcp
00:24:34.883 rmmod nvme_fabrics
00:24:34.883 rmmod nvme_keyring
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 681201 ']'
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 681201
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 681201 ']'
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 681201
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 681201
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 681201'
00:24:34.883 killing process with pid 681201
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 681201
00:24:34.883 11:29:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 681201
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:35.143 11:29:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:37.679 11:29:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:37.679 
00:24:37.679 real 0m15.836s
00:24:37.679 user 0m45.914s
00:24:37.679 sys 0m7.383s
00:24:37.679 11:29:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:37.679 11:29:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:24:37.679 ************************************
00:24:37.679 END TEST nvmf_target_disconnect
00:24:37.679 ************************************
00:24:37.679 11:29:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:37.679 11:29:03 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:24:37.679 11:29:03 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:37.679 11:29:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:37.679 11:29:03 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:24:37.679 
00:24:37.679 real 19m10.985s
00:24:37.679 user 46m30.147s
00:24:37.679 sys 5m18.319s
00:24:37.679 11:29:03 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:37.679 11:29:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:37.679 ************************************
00:24:37.679 END TEST nvmf_tcp
00:24:37.679 ************************************
00:24:37.679 11:29:03 -- common/autotest_common.sh@1142 -- # return 0
00:24:37.679 11:29:03 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:24:37.679 11:29:03 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:24:37.679 11:29:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:37.679 11:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:37.679 11:29:03 -- common/autotest_common.sh@10 -- # set +x
00:24:37.679 ************************************
00:24:37.679 START TEST spdkcli_nvmf_tcp
00:24:37.679 ************************************
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:24:37.679 * Looking for test storage...
00:24:37.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=682359
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 682359
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 682359 ']'
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:37.679 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:37.680 [2024-07-12 11:29:03.485722] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization...
00:24:37.680 [2024-07-12 11:29:03.485820] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682359 ]
00:24:37.680 EAL: No free 2048 kB hugepages reported on node 1
00:24:37.680 [2024-07-12 11:29:03.544517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:37.680 [2024-07-12 11:29:03.652410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:37.680 [2024-07-12 11:29:03.652414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:37.680 11:29:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:37.680 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:37.680 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:37.680 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:37.680 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:37.680 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:37.680 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:37.680 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:37.680 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:37.680 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:37.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:37.680 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:37.680 ' 00:24:40.209 [2024-07-12 11:29:06.277293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.582 [2024-07-12 11:29:07.517529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:44.110 [2024-07-12 11:29:09.816530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:46.013 [2024-07-12 11:29:11.806834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:47.387 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:24:47.387 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:47.387 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:47.387 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:47.387 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:47.387 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:47.387 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:47.387 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:47.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:47.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:47.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:47.387 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:47.387 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:47.387 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:47.388 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:47.388 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:47.388 11:29:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.953 11:29:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:47.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:47.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:47.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:47.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:47.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:47.953 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:47.953 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:47.953 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:47.953 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:47.953 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:47.953 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:47.953 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:47.953 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:47.953 ' 00:24:53.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:53.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:53.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:53.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:53.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:53.219 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:53.219 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:53.219 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:53.219 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:53.219 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:53.219 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:53.219 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:53.219 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:53.219 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 682359 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 682359 ']' 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 682359 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 682359 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 682359' 00:24:53.219 killing process with pid 682359 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 682359 00:24:53.219 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 682359 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 682359 ']' 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 682359 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 682359 ']' 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 682359 00:24:53.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (682359) - No such process 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 682359 is not found' 00:24:53.478 Process with pid 682359 is not found 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:53.478 00:24:53.478 real 0m16.062s 00:24:53.478 user 0m34.013s 00:24:53.478 sys 0m0.776s 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.478 11:29:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.478 ************************************ 00:24:53.478 END TEST spdkcli_nvmf_tcp 00:24:53.478 ************************************ 00:24:53.478 11:29:19 -- common/autotest_common.sh@1142 -- # return 0 00:24:53.478 11:29:19 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:53.478 11:29:19 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.478 11:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.478 11:29:19 -- common/autotest_common.sh@10 -- # set +x 00:24:53.478 ************************************ 00:24:53.478 START TEST nvmf_identify_passthru 00:24:53.478 
************************************ 00:24:53.478 11:29:19 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:53.478 * Looking for test storage... 00:24:53.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.478 11:29:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.478 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.478 11:29:19 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.479 11:29:19 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.479 11:29:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.479 11:29:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.479 11:29:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.479 11:29:19 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.479 11:29:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.479 11:29:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:24:53.479 11:29:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.479 11:29:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.479 11:29:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:53.479 11:29:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.479 11:29:19 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.479 11:29:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.381 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:55.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:55.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:55.382 11:29:21 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:55.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.382 11:29:21 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:55.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.382 11:29:21 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.382 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:24:55.640 00:24:55.640 --- 10.0.0.2 ping statistics --- 00:24:55.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.640 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:24:55.640 00:24:55.640 --- 10.0.0.1 ping statistics --- 00:24:55.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.640 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.640 11:29:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:24:55.640 11:29:21 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:24:55.640 11:29:21 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:55.640 11:29:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:55.640 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.855 11:29:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:24:59.855 11:29:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:24:59.855 11:29:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:24:59.855 11:29:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:59.855 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.037 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:04.038 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.038 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.038 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=686757 00:25:04.038 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:04.038 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.038 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 686757 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 686757 ']' 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:04.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.038 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.038 [2024-07-12 11:29:30.158054] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:25:04.038 [2024-07-12 11:29:30.158153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.296 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.296 [2024-07-12 11:29:30.224640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:04.296 [2024-07-12 11:29:30.335711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.296 [2024-07-12 11:29:30.335777] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.296 [2024-07-12 11:29:30.335805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.296 [2024-07-12 11:29:30.335816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.296 [2024-07-12 11:29:30.335826] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:04.296 [2024-07-12 11:29:30.337897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.296 [2024-07-12 11:29:30.337925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.296 [2024-07-12 11:29:30.337988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.296 [2024-07-12 11:29:30.337991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:04.296 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.296 INFO: Log level set to 20 00:25:04.296 INFO: Requests: 00:25:04.296 { 00:25:04.296 "jsonrpc": "2.0", 00:25:04.296 "method": "nvmf_set_config", 00:25:04.296 "id": 1, 00:25:04.296 "params": { 00:25:04.296 "admin_cmd_passthru": { 00:25:04.296 "identify_ctrlr": true 00:25:04.296 } 00:25:04.296 } 00:25:04.296 } 00:25:04.296 00:25:04.296 INFO: response: 00:25:04.296 { 00:25:04.296 "jsonrpc": "2.0", 00:25:04.296 "id": 1, 00:25:04.296 "result": true 00:25:04.296 } 00:25:04.296 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.296 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.296 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.296 INFO: Setting log level to 20 00:25:04.296 INFO: Setting log level to 20 00:25:04.296 INFO: Log level set to 20 00:25:04.296 INFO: Log level set to 20 00:25:04.296 
INFO: Requests: 00:25:04.296 { 00:25:04.296 "jsonrpc": "2.0", 00:25:04.296 "method": "framework_start_init", 00:25:04.296 "id": 1 00:25:04.296 } 00:25:04.296 00:25:04.296 INFO: Requests: 00:25:04.296 { 00:25:04.296 "jsonrpc": "2.0", 00:25:04.296 "method": "framework_start_init", 00:25:04.296 "id": 1 00:25:04.296 } 00:25:04.296 00:25:04.555 [2024-07-12 11:29:30.484114] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:04.555 INFO: response: 00:25:04.555 { 00:25:04.555 "jsonrpc": "2.0", 00:25:04.555 "id": 1, 00:25:04.555 "result": true 00:25:04.555 } 00:25:04.555 00:25:04.555 INFO: response: 00:25:04.555 { 00:25:04.555 "jsonrpc": "2.0", 00:25:04.555 "id": 1, 00:25:04.555 "result": true 00:25:04.555 } 00:25:04.555 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.555 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.555 INFO: Setting log level to 40 00:25:04.555 INFO: Setting log level to 40 00:25:04.555 INFO: Setting log level to 40 00:25:04.555 [2024-07-12 11:29:30.494110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.555 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:04.555 11:29:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:04.555 11:29:30 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.555 11:29:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.832 Nvme0n1 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.832 [2024-07-12 11:29:33.384109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.832 11:29:33 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.832 [ 00:25:07.832 { 00:25:07.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:07.832 "subtype": "Discovery", 00:25:07.832 "listen_addresses": [], 00:25:07.832 "allow_any_host": true, 00:25:07.832 "hosts": [] 00:25:07.832 }, 00:25:07.832 { 00:25:07.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.832 "subtype": "NVMe", 00:25:07.832 "listen_addresses": [ 00:25:07.832 { 00:25:07.832 "trtype": "TCP", 00:25:07.832 "adrfam": "IPv4", 00:25:07.832 "traddr": "10.0.0.2", 00:25:07.832 "trsvcid": "4420" 00:25:07.832 } 00:25:07.832 ], 00:25:07.832 "allow_any_host": true, 00:25:07.832 "hosts": [], 00:25:07.832 "serial_number": "SPDK00000000000001", 00:25:07.832 "model_number": "SPDK bdev Controller", 00:25:07.832 "max_namespaces": 1, 00:25:07.832 "min_cntlid": 1, 00:25:07.832 "max_cntlid": 65519, 00:25:07.832 "namespaces": [ 00:25:07.832 { 00:25:07.832 "nsid": 1, 00:25:07.832 "bdev_name": "Nvme0n1", 00:25:07.832 "name": "Nvme0n1", 00:25:07.832 "nguid": "F6AA0E43B09E40DE982554BF86510616", 00:25:07.832 "uuid": "f6aa0e43-b09e-40de-9825-54bf86510616" 00:25:07.832 } 00:25:07.832 ] 00:25:07.832 } 00:25:07.832 ] 00:25:07.832 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:07.832 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:07.832 11:29:33 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:07.832 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:07.832 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.833 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:07.833 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.833 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:07.833 11:29:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.833 rmmod 
nvme_tcp 00:25:07.833 rmmod nvme_fabrics 00:25:07.833 rmmod nvme_keyring 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 686757 ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 686757 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 686757 ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 686757 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 686757 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 686757' 00:25:07.833 killing process with pid 686757 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 686757 00:25:07.833 11:29:33 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 686757 00:25:09.732 11:29:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.732 11:29:35 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.732 11:29:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.732 11:29:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.732 
11:29:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.732 11:29:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.732 11:29:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:09.732 11:29:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.637 11:29:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.637 00:25:11.637 real 0m18.112s 00:25:11.637 user 0m27.068s 00:25:11.637 sys 0m2.300s 00:25:11.637 11:29:37 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.637 11:29:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:11.637 ************************************ 00:25:11.637 END TEST nvmf_identify_passthru 00:25:11.637 ************************************ 00:25:11.637 11:29:37 -- common/autotest_common.sh@1142 -- # return 0 00:25:11.637 11:29:37 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:11.637 11:29:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:11.637 11:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.637 11:29:37 -- common/autotest_common.sh@10 -- # set +x 00:25:11.637 ************************************ 00:25:11.637 START TEST nvmf_dif 00:25:11.637 ************************************ 00:25:11.637 11:29:37 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:11.637 * Looking for test storage... 
00:25:11.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.637 11:29:37 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.637 11:29:37 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.637 11:29:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.637 11:29:37 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.637 11:29:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.637 11:29:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.638 11:29:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.638 11:29:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.638 11:29:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:11.638 11:29:37 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.638 11:29:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:11.638 11:29:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:11.638 11:29:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:11.638 11:29:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:11.638 11:29:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.638 11:29:37 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:11.638 11:29:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.638 11:29:37 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.638 11:29:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.169 11:29:39 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:14.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:25:14.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:14.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:14.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.170 11:29:39 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:25:14.170 00:25:14.170 --- 10.0.0.2 ping statistics --- 00:25:14.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.170 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:14.170 00:25:14.170 --- 10.0.0.1 ping statistics --- 00:25:14.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.170 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:14.170 11:29:39 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:15.102 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:15.102 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:15.102 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:15.103 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:15.103 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:15.103 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:15.103 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:15.103 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:15.103 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:15.103 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:15.103 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:15.103 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:15.103 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:15.103 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:15.103 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:15.103 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:15.103 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.103 11:29:41 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.103 11:29:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:15.103 11:29:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=689968 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:15.103 11:29:41 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 689968 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 689968 ']' 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.103 11:29:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.103 [2024-07-12 11:29:41.186827] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:25:15.103 [2024-07-12 11:29:41.186925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.103 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.359 [2024-07-12 11:29:41.251549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.359 [2024-07-12 11:29:41.356917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.359 [2024-07-12 11:29:41.356972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.359 [2024-07-12 11:29:41.356985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.359 [2024-07-12 11:29:41.356996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.359 [2024-07-12 11:29:41.357006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:15.359 [2024-07-12 11:29:41.357037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.359 11:29:41 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.359 11:29:41 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:15.359 11:29:41 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.359 11:29:41 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.359 11:29:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.360 11:29:41 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.360 11:29:41 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:15.360 11:29:41 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:15.360 11:29:41 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.360 11:29:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.360 [2024-07-12 11:29:41.485945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.360 11:29:41 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.360 11:29:41 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:15.360 11:29:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:15.360 11:29:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.360 11:29:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 ************************************ 00:25:15.618 START TEST fio_dif_1_default 00:25:15.618 ************************************ 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 bdev_null0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 [2024-07-12 11:29:41.522185] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:15.618 { 00:25:15.618 "params": { 00:25:15.618 "name": "Nvme$subsystem", 00:25:15.618 "trtype": "$TEST_TRANSPORT", 00:25:15.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.618 "adrfam": "ipv4", 00:25:15.618 "trsvcid": "$NVMF_PORT", 00:25:15.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.618 "hdgst": ${hdgst:-false}, 00:25:15.618 "ddgst": ${ddgst:-false} 00:25:15.618 }, 00:25:15.618 "method": "bdev_nvme_attach_controller" 00:25:15.618 } 00:25:15.618 EOF 00:25:15.618 )") 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:15.618 "params": { 00:25:15.618 "name": "Nvme0", 00:25:15.618 "trtype": "tcp", 00:25:15.618 "traddr": "10.0.0.2", 00:25:15.618 "adrfam": "ipv4", 00:25:15.618 "trsvcid": "4420", 00:25:15.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.618 "hdgst": false, 00:25:15.618 "ddgst": false 00:25:15.618 }, 00:25:15.618 "method": "bdev_nvme_attach_controller" 00:25:15.618 }' 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:15.618 11:29:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:15.875 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:15.875 fio-3.35 
00:25:15.875 Starting 1 thread 00:25:15.875 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.095 00:25:28.095 filename0: (groupid=0, jobs=1): err= 0: pid=690187: Fri Jul 12 11:29:52 2024 00:25:28.095 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:25:28.095 slat (nsec): min=4061, max=76287, avg=9475.23, stdev=2848.16 00:25:28.095 clat (usec): min=552, max=45402, avg=21068.69, stdev=20357.26 00:25:28.095 lat (usec): min=560, max=45428, avg=21078.17, stdev=20357.15 00:25:28.095 clat percentiles (usec): 00:25:28.095 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 594], 20.00th=[ 611], 00:25:28.095 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[41157], 60.00th=[41157], 00:25:28.095 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:28.095 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:25:28.095 | 99.99th=[45351] 00:25:28.095 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:25:28.095 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:25:28.095 lat (usec) : 750=49.37%, 1000=0.42% 00:25:28.095 lat (msec) : 50=50.21% 00:25:28.095 cpu : usr=89.42%, sys=10.33%, ctx=10, majf=0, minf=241 00:25:28.095 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:28.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:28.095 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:28.095 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:28.095 00:25:28.095 Run status group 0 (all jobs): 00:25:28.095 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7584KiB (7766kB), run=10001-10001msec 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:28.095 11:29:52 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 00:25:28.095 real 0m11.185s 00:25:28.095 user 0m10.116s 00:25:28.095 sys 0m1.329s 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 ************************************ 00:25:28.095 END TEST fio_dif_1_default 00:25:28.095 ************************************ 00:25:28.095 11:29:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:28.095 11:29:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:28.095 11:29:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:28.095 11:29:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 
************************************ 00:25:28.095 START TEST fio_dif_1_multi_subsystems 00:25:28.095 ************************************ 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 bdev_null0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 [2024-07-12 11:29:52.738688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 bdev_null1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:28.095 11:29:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:28.095 { 00:25:28.095 "params": { 00:25:28.095 "name": "Nvme$subsystem", 00:25:28.095 "trtype": "$TEST_TRANSPORT", 00:25:28.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.095 "adrfam": "ipv4", 00:25:28.095 "trsvcid": "$NVMF_PORT", 00:25:28.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.095 "hdgst": ${hdgst:-false}, 00:25:28.095 "ddgst": ${ddgst:-false} 00:25:28.095 }, 00:25:28.095 "method": "bdev_nvme_attach_controller" 00:25:28.095 } 00:25:28.095 EOF 00:25:28.095 )") 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:28.095 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:28.096 { 00:25:28.096 "params": { 00:25:28.096 "name": "Nvme$subsystem", 00:25:28.096 "trtype": "$TEST_TRANSPORT", 00:25:28.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.096 "adrfam": "ipv4", 00:25:28.096 "trsvcid": "$NVMF_PORT", 00:25:28.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.096 "hdgst": ${hdgst:-false}, 00:25:28.096 "ddgst": ${ddgst:-false} 00:25:28.096 }, 00:25:28.096 "method": "bdev_nvme_attach_controller" 00:25:28.096 } 00:25:28.096 EOF 00:25:28.096 )") 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:25:28.096 
11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:28.096 "params": { 00:25:28.096 "name": "Nvme0", 00:25:28.096 "trtype": "tcp", 00:25:28.096 "traddr": "10.0.0.2", 00:25:28.096 "adrfam": "ipv4", 00:25:28.096 "trsvcid": "4420", 00:25:28.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:28.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:28.096 "hdgst": false, 00:25:28.096 "ddgst": false 00:25:28.096 }, 00:25:28.096 "method": "bdev_nvme_attach_controller" 00:25:28.096 },{ 00:25:28.096 "params": { 00:25:28.096 "name": "Nvme1", 00:25:28.096 "trtype": "tcp", 00:25:28.096 "traddr": "10.0.0.2", 00:25:28.096 "adrfam": "ipv4", 00:25:28.096 "trsvcid": "4420", 00:25:28.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.096 "hdgst": false, 00:25:28.096 "ddgst": false 00:25:28.096 }, 00:25:28.096 "method": "bdev_nvme_attach_controller" 00:25:28.096 }' 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:28.096 11:29:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:28.096 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:28.096 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:28.096 fio-3.35 00:25:28.096 Starting 2 threads 00:25:28.096 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.049 00:25:38.049 filename0: (groupid=0, jobs=1): err= 0: pid=691539: Fri Jul 12 11:30:03 2024 00:25:38.049 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:25:38.049 slat (nsec): min=7528, max=28496, avg=9472.08, stdev=2253.96 00:25:38.049 clat (usec): min=40660, max=45798, avg=40990.55, stdev=310.39 00:25:38.049 lat (usec): min=40668, max=45812, avg=41000.02, stdev=310.49 00:25:38.049 clat percentiles (usec): 00:25:38.049 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:38.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:38.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:38.049 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:25:38.049 | 99.99th=[45876] 00:25:38.049 bw ( KiB/s): min= 384, max= 416, per=47.07%, avg=388.80, stdev=11.72, samples=20 00:25:38.049 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:25:38.049 lat (msec) : 50=100.00% 00:25:38.049 cpu : usr=94.25%, sys=5.49%, ctx=14, majf=0, minf=125 00:25:38.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:25:38.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.049 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.049 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:38.049 filename1: (groupid=0, jobs=1): err= 0: pid=691540: Fri Jul 12 11:30:03 2024 00:25:38.049 read: IOPS=108, BW=435KiB/s (446kB/s)(4368KiB/10036msec) 00:25:38.049 slat (nsec): min=6172, max=30030, avg=9910.23, stdev=2869.49 00:25:38.049 clat (usec): min=547, max=44857, avg=36730.36, stdev=12461.07 00:25:38.049 lat (usec): min=555, max=44872, avg=36740.27, stdev=12461.13 00:25:38.049 clat percentiles (usec): 00:25:38.049 | 1.00th=[ 562], 5.00th=[ 603], 10.00th=[ 668], 20.00th=[41157], 00:25:38.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:38.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:38.049 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:25:38.049 | 99.99th=[44827] 00:25:38.049 bw ( KiB/s): min= 384, max= 608, per=52.78%, avg=435.20, stdev=69.95, samples=20 00:25:38.049 iops : min= 96, max= 152, avg=108.80, stdev=17.49, samples=20 00:25:38.049 lat (usec) : 750=10.62% 00:25:38.049 lat (msec) : 50=89.38% 00:25:38.049 cpu : usr=94.17%, sys=5.48%, ctx=32, majf=0, minf=131 00:25:38.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:38.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.049 issued rwts: total=1092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.049 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:38.049 00:25:38.049 Run status group 0 (all jobs): 00:25:38.049 READ: bw=824KiB/s (844kB/s), 390KiB/s-435KiB/s (399kB/s-446kB/s), io=8272KiB (8471kB), run=10009-10036msec 00:25:38.049 
11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:38.049 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.050 00:25:38.050 real 0m11.431s 00:25:38.050 user 0m20.417s 00:25:38.050 sys 0m1.359s 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:38.050 11:30:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:38.050 ************************************ 00:25:38.050 END TEST fio_dif_1_multi_subsystems 00:25:38.050 ************************************ 00:25:38.050 11:30:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:25:38.050 11:30:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:38.050 11:30:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:38.050 11:30:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.050 11:30:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:38.050 ************************************ 00:25:38.050 START TEST fio_dif_rand_params 00:25:38.050 ************************************ 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # NULL_DIF=3 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.050 bdev_null0 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.050 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.318 [2024-07-12 11:30:04.199283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:38.318 { 00:25:38.318 "params": { 00:25:38.318 "name": "Nvme$subsystem", 00:25:38.318 "trtype": "$TEST_TRANSPORT", 00:25:38.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.318 "adrfam": "ipv4", 00:25:38.318 "trsvcid": "$NVMF_PORT", 00:25:38.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.318 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.318 "hdgst": ${hdgst:-false}, 00:25:38.318 "ddgst": ${ddgst:-false} 00:25:38.318 }, 00:25:38.318 "method": "bdev_nvme_attach_controller" 00:25:38.318 } 00:25:38.318 EOF 00:25:38.318 )") 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:38.318 "params": { 00:25:38.318 "name": "Nvme0", 00:25:38.318 "trtype": "tcp", 00:25:38.318 "traddr": "10.0.0.2", 00:25:38.318 "adrfam": "ipv4", 00:25:38.318 "trsvcid": "4420", 00:25:38.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:38.318 "hdgst": false, 00:25:38.318 "ddgst": false 00:25:38.318 }, 00:25:38.318 "method": "bdev_nvme_attach_controller" 00:25:38.318 }' 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:38.318 11:30:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.576 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:38.576 ... 00:25:38.576 fio-3.35 00:25:38.576 Starting 3 threads 00:25:38.576 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.127 00:25:45.127 filename0: (groupid=0, jobs=1): err= 0: pid=692999: Fri Jul 12 11:30:10 2024 00:25:45.127 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(134MiB/5044msec) 00:25:45.127 slat (nsec): min=4615, max=38345, avg=16818.12, stdev=3986.61 00:25:45.127 clat (usec): min=7201, max=47054, avg=14029.84, stdev=2362.73 00:25:45.127 lat (usec): min=7221, max=47074, avg=14046.66, stdev=2363.00 00:25:45.127 clat percentiles (usec): 00:25:45.127 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[11338], 20.00th=[12518], 00:25:45.127 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14222], 60.00th=[14615], 00:25:45.127 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16188], 95.00th=[16712], 00:25:45.127 | 99.00th=[17695], 99.50th=[18220], 99.90th=[45351], 99.95th=[46924], 00:25:45.127 | 99.99th=[46924] 00:25:45.127 bw ( KiB/s): min=25344, max=29755, per=31.01%, avg=27423.50, stdev=1475.68, samples=10 00:25:45.127 iops : min= 198, max= 232, avg=214.20, stdev=11.45, samples=10 00:25:45.127 lat (msec) : 10=4.00%, 20=95.81%, 50=0.19% 00:25:45.127 cpu : usr=94.59%, sys=4.92%, ctx=7, majf=0, minf=88 00:25:45.127 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:45.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.127 issued rwts: total=1074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.127 latency : target=0, window=0, percentile=100.00%, 
depth=3 00:25:45.127 filename0: (groupid=0, jobs=1): err= 0: pid=693000: Fri Jul 12 11:30:10 2024 00:25:45.127 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(152MiB/5028msec) 00:25:45.127 slat (nsec): min=4441, max=82783, avg=18512.51, stdev=4665.81 00:25:45.127 clat (usec): min=7418, max=54768, avg=12380.20, stdev=3765.01 00:25:45.127 lat (usec): min=7434, max=54794, avg=12398.71, stdev=3764.89 00:25:45.127 clat percentiles (usec): 00:25:45.127 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10814], 00:25:45.127 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:25:45.127 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14222], 95.00th=[14877], 00:25:45.127 | 99.00th=[16909], 99.50th=[50594], 99.90th=[54264], 99.95th=[54789], 00:25:45.127 | 99.99th=[54789] 00:25:45.127 bw ( KiB/s): min=26112, max=33792, per=35.11%, avg=31052.80, stdev=2312.05, samples=10 00:25:45.127 iops : min= 204, max= 264, avg=242.60, stdev=18.06, samples=10 00:25:45.127 lat (msec) : 10=5.92%, 20=93.09%, 50=0.33%, 100=0.66% 00:25:45.127 cpu : usr=94.55%, sys=4.89%, ctx=9, majf=0, minf=127 00:25:45.127 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:45.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.127 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.127 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:45.127 filename0: (groupid=0, jobs=1): err= 0: pid=693001: Fri Jul 12 11:30:10 2024 00:25:45.127 read: IOPS=238, BW=29.9MiB/s (31.3MB/s)(149MiB/5004msec) 00:25:45.127 slat (nsec): min=3849, max=48906, avg=18869.89, stdev=4088.25 00:25:45.127 clat (usec): min=6418, max=53169, avg=12538.00, stdev=3254.15 00:25:45.127 lat (usec): min=6437, max=53191, avg=12556.87, stdev=3254.44 00:25:45.127 clat percentiles (usec): 00:25:45.127 | 1.00th=[ 8029], 5.00th=[ 9896], 10.00th=[10552], 
20.00th=[11076], 00:25:45.127 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:25:45.127 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15139], 00:25:45.127 | 99.00th=[16909], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:25:45.127 | 99.99th=[53216] 00:25:45.127 bw ( KiB/s): min=27904, max=32256, per=34.50%, avg=30515.20, stdev=1722.80, samples=10 00:25:45.127 iops : min= 218, max= 252, avg=238.40, stdev=13.46, samples=10 00:25:45.127 lat (msec) : 10=5.61%, 20=93.89%, 100=0.50% 00:25:45.127 cpu : usr=94.06%, sys=5.40%, ctx=19, majf=0, minf=106 00:25:45.127 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:45.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.127 issued rwts: total=1195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.127 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:45.127 00:25:45.127 Run status group 0 (all jobs): 00:25:45.127 READ: bw=86.4MiB/s (90.6MB/s), 26.6MiB/s-30.2MiB/s (27.9MB/s-31.7MB/s), io=436MiB (457MB), run=5004-5044msec 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.127 11:30:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.127 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 bdev_null0 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 [2024-07-12 11:30:10.361718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 bdev_null1 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 bdev_null2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:45.128 11:30:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:45.128 { 00:25:45.128 "params": { 00:25:45.128 "name": "Nvme$subsystem", 00:25:45.128 "trtype": "$TEST_TRANSPORT", 00:25:45.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.128 "adrfam": "ipv4", 00:25:45.128 "trsvcid": "$NVMF_PORT", 00:25:45.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:25:45.128 "hdgst": ${hdgst:-false}, 00:25:45.128 "ddgst": ${ddgst:-false} 00:25:45.128 }, 00:25:45.128 "method": "bdev_nvme_attach_controller" 00:25:45.128 } 00:25:45.128 EOF 00:25:45.128 )") 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:45.128 { 00:25:45.128 "params": { 00:25:45.128 "name": "Nvme$subsystem", 00:25:45.128 "trtype": "$TEST_TRANSPORT", 00:25:45.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.128 "adrfam": "ipv4", 00:25:45.128 "trsvcid": "$NVMF_PORT", 00:25:45.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.128 "hdgst": ${hdgst:-false}, 00:25:45.128 "ddgst": ${ddgst:-false} 00:25:45.128 }, 00:25:45.128 "method": "bdev_nvme_attach_controller" 00:25:45.128 } 00:25:45.128 EOF 
00:25:45.128 )") 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:45.128 { 00:25:45.128 "params": { 00:25:45.128 "name": "Nvme$subsystem", 00:25:45.128 "trtype": "$TEST_TRANSPORT", 00:25:45.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.128 "adrfam": "ipv4", 00:25:45.128 "trsvcid": "$NVMF_PORT", 00:25:45.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.128 "hdgst": ${hdgst:-false}, 00:25:45.128 "ddgst": ${ddgst:-false} 00:25:45.128 }, 00:25:45.128 "method": "bdev_nvme_attach_controller" 00:25:45.128 } 00:25:45.128 EOF 00:25:45.128 )") 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:45.128 11:30:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:45.128 "params": { 00:25:45.128 "name": "Nvme0", 00:25:45.128 "trtype": "tcp", 00:25:45.128 "traddr": "10.0.0.2", 00:25:45.128 "adrfam": "ipv4", 00:25:45.128 "trsvcid": "4420", 00:25:45.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:45.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:45.128 "hdgst": false, 00:25:45.128 "ddgst": false 00:25:45.128 }, 00:25:45.128 "method": "bdev_nvme_attach_controller" 00:25:45.128 },{ 00:25:45.128 "params": { 00:25:45.129 "name": "Nvme1", 00:25:45.129 "trtype": "tcp", 00:25:45.129 "traddr": "10.0.0.2", 00:25:45.129 "adrfam": "ipv4", 00:25:45.129 "trsvcid": "4420", 00:25:45.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.129 "hdgst": false, 00:25:45.129 "ddgst": false 00:25:45.129 }, 00:25:45.129 "method": "bdev_nvme_attach_controller" 00:25:45.129 },{ 00:25:45.129 "params": { 00:25:45.129 "name": "Nvme2", 00:25:45.129 "trtype": "tcp", 00:25:45.129 "traddr": "10.0.0.2", 00:25:45.129 "adrfam": "ipv4", 00:25:45.129 "trsvcid": "4420", 00:25:45.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:45.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:45.129 "hdgst": false, 00:25:45.129 "ddgst": false 00:25:45.129 }, 00:25:45.129 "method": "bdev_nvme_attach_controller" 00:25:45.129 }' 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:45.129 11:30:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:45.129 11:30:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.129 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:45.129 ... 00:25:45.129 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:45.129 ... 00:25:45.129 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:45.129 ... 
00:25:45.129 fio-3.35 00:25:45.129 Starting 24 threads 00:25:45.129 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.313 00:25:57.313 filename0: (groupid=0, jobs=1): err= 0: pid=694340: Fri Jul 12 11:30:21 2024 00:25:57.313 read: IOPS=47, BW=189KiB/s (193kB/s)(1912KiB/10128msec) 00:25:57.313 slat (nsec): min=8395, max=86710, avg=26943.75, stdev=13466.95 00:25:57.313 clat (msec): min=176, max=643, avg=338.63, stdev=66.53 00:25:57.313 lat (msec): min=176, max=643, avg=338.66, stdev=66.53 00:25:57.313 clat percentiles (msec): 00:25:57.313 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 259], 20.00th=[ 330], 00:25:57.313 | 30.00th=[ 338], 40.00th=[ 338], 50.00th=[ 342], 60.00th=[ 347], 00:25:57.313 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 422], 00:25:57.313 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 642], 99.95th=[ 642], 00:25:57.313 | 99.99th=[ 642] 00:25:57.313 bw ( KiB/s): min= 128, max= 256, per=3.37%, avg=194.53, stdev=64.94, samples=19 00:25:57.313 iops : min= 32, max= 64, avg=48.63, stdev=16.24, samples=19 00:25:57.313 lat (msec) : 250=8.79%, 500=87.87%, 750=3.35% 00:25:57.313 cpu : usr=98.55%, sys=1.06%, ctx=13, majf=0, minf=25 00:25:57.313 IO depths : 1=4.6%, 2=10.9%, 4=25.1%, 8=51.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:25:57.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.313 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.313 issued rwts: total=478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.313 filename0: (groupid=0, jobs=1): err= 0: pid=694341: Fri Jul 12 11:30:21 2024 00:25:57.313 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10128msec) 00:25:57.313 slat (nsec): min=8042, max=75095, avg=15177.47, stdev=9297.75 00:25:57.313 clat (msec): min=178, max=492, avg=288.90, stdev=69.12 00:25:57.313 lat (msec): min=178, max=492, avg=288.92, stdev=69.12 00:25:57.313 clat percentiles (msec): 00:25:57.313 | 
1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 234], 00:25:57.313 | 30.00th=[ 243], 40.00th=[ 255], 50.00th=[ 268], 60.00th=[ 326], 00:25:57.313 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 368], 00:25:57.313 | 99.00th=[ 468], 99.50th=[ 489], 99.90th=[ 493], 99.95th=[ 493], 00:25:57.313 | 99.99th=[ 493] 00:25:57.313 bw ( KiB/s): min= 128, max= 368, per=3.77%, avg=217.60, stdev=70.49, samples=20 00:25:57.313 iops : min= 32, max= 92, avg=54.40, stdev=17.62, samples=20 00:25:57.313 lat (msec) : 250=34.29%, 500=65.71% 00:25:57.313 cpu : usr=98.03%, sys=1.41%, ctx=46, majf=0, minf=34 00:25:57.313 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:25:57.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.313 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.313 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.313 filename0: (groupid=0, jobs=1): err= 0: pid=694342: Fri Jul 12 11:30:21 2024 00:25:57.313 read: IOPS=68, BW=273KiB/s (280kB/s)(2768KiB/10140msec) 00:25:57.313 slat (nsec): min=7933, max=51739, avg=10428.77, stdev=3890.41 00:25:57.313 clat (msec): min=169, max=354, avg=233.17, stdev=31.92 00:25:57.313 lat (msec): min=169, max=354, avg=233.18, stdev=31.92 00:25:57.313 clat percentiles (msec): 00:25:57.313 | 1.00th=[ 171], 5.00th=[ 182], 10.00th=[ 199], 20.00th=[ 213], 00:25:57.313 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 236], 00:25:57.313 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 259], 95.00th=[ 288], 00:25:57.313 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:25:57.313 | 99.99th=[ 355] 00:25:57.314 bw ( KiB/s): min= 256, max= 304, per=4.70%, avg=270.40, stdev=22.57, samples=20 00:25:57.314 iops : min= 64, max= 76, avg=67.60, stdev= 5.64, samples=20 00:25:57.314 lat (msec) : 250=74.57%, 500=25.43% 00:25:57.314 cpu : 
usr=98.06%, sys=1.42%, ctx=39, majf=0, minf=23 00:25:57.314 IO depths : 1=0.6%, 2=1.3%, 4=8.1%, 8=77.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.314 filename0: (groupid=0, jobs=1): err= 0: pid=694343: Fri Jul 12 11:30:21 2024 00:25:57.314 read: IOPS=68, BW=274KiB/s (280kB/s)(2776KiB/10141msec) 00:25:57.314 slat (usec): min=8, max=101, avg=13.03, stdev=11.83 00:25:57.314 clat (msec): min=178, max=322, avg=233.41, stdev=31.89 00:25:57.314 lat (msec): min=178, max=322, avg=233.43, stdev=31.89 00:25:57.314 clat percentiles (msec): 00:25:57.314 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 184], 20.00th=[ 197], 00:25:57.314 | 30.00th=[ 222], 40.00th=[ 230], 50.00th=[ 236], 60.00th=[ 245], 00:25:57.314 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 268], 95.00th=[ 268], 00:25:57.314 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:25:57.314 | 99.99th=[ 321] 00:25:57.314 bw ( KiB/s): min= 256, max= 368, per=4.71%, avg=271.20, stdev=35.01, samples=20 00:25:57.314 iops : min= 64, max= 92, avg=67.80, stdev= 8.75, samples=20 00:25:57.314 lat (msec) : 250=68.88%, 500=31.12% 00:25:57.314 cpu : usr=98.41%, sys=1.15%, ctx=15, majf=0, minf=33 00:25:57.314 IO depths : 1=1.9%, 2=6.2%, 4=19.2%, 8=62.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.314 filename0: (groupid=0, jobs=1): err= 0: pid=694344: Fri Jul 12 11:30:21 2024 00:25:57.314 read: IOPS=80, 
BW=320KiB/s (328kB/s)(3256KiB/10172msec) 00:25:57.314 slat (usec): min=3, max=104, avg=22.03, stdev=25.28 00:25:57.314 clat (usec): min=1834, max=344656, avg=199060.17, stdev=79748.13 00:25:57.314 lat (usec): min=1843, max=344665, avg=199082.20, stdev=79752.81 00:25:57.314 clat percentiles (msec): 00:25:57.314 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 22], 20.00th=[ 194], 00:25:57.314 | 30.00th=[ 211], 40.00th=[ 215], 50.00th=[ 226], 60.00th=[ 230], 00:25:57.314 | 70.00th=[ 236], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 257], 00:25:57.314 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:25:57.314 | 99.99th=[ 347] 00:25:57.314 bw ( KiB/s): min= 224, max= 1120, per=5.55%, avg=319.20, stdev=192.10, samples=20 00:25:57.314 iops : min= 56, max= 280, avg=79.80, stdev=48.03, samples=20 00:25:57.314 lat (msec) : 2=0.98%, 4=8.85%, 50=1.11%, 100=3.44%, 250=67.32% 00:25:57.314 lat (msec) : 500=18.30% 00:25:57.314 cpu : usr=97.85%, sys=1.54%, ctx=57, majf=0, minf=73 00:25:57.314 IO depths : 1=0.1%, 2=0.4%, 4=6.1%, 8=80.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=88.9%, 8=6.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.314 filename0: (groupid=0, jobs=1): err= 0: pid=694345: Fri Jul 12 11:30:21 2024 00:25:57.314 read: IOPS=66, BW=267KiB/s (274kB/s)(2712KiB/10141msec) 00:25:57.314 slat (nsec): min=8045, max=93019, avg=11865.75, stdev=6934.64 00:25:57.314 clat (msec): min=176, max=374, avg=238.95, stdev=45.60 00:25:57.314 lat (msec): min=176, max=374, avg=238.96, stdev=45.60 00:25:57.314 clat percentiles (msec): 00:25:57.314 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:25:57.314 | 30.00th=[ 218], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 251], 00:25:57.314 | 70.00th=[ 257], 80.00th=[ 264], 
90.00th=[ 271], 95.00th=[ 347], 00:25:57.314 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:25:57.314 | 99.99th=[ 376] 00:25:57.314 bw ( KiB/s): min= 128, max= 368, per=4.59%, avg=264.80, stdev=51.51, samples=20 00:25:57.314 iops : min= 32, max= 92, avg=66.20, stdev=12.88, samples=20 00:25:57.314 lat (msec) : 250=58.70%, 500=41.30% 00:25:57.314 cpu : usr=98.27%, sys=1.28%, ctx=13, majf=0, minf=33 00:25:57.314 IO depths : 1=0.4%, 2=4.4%, 4=17.7%, 8=65.0%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=92.0%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.314 filename0: (groupid=0, jobs=1): err= 0: pid=694346: Fri Jul 12 11:30:21 2024 00:25:57.314 read: IOPS=72, BW=290KiB/s (297kB/s)(2952KiB/10168msec) 00:25:57.314 slat (nsec): min=3999, max=72520, avg=11268.18, stdev=4919.45 00:25:57.314 clat (msec): min=31, max=349, avg=219.05, stdev=47.99 00:25:57.314 lat (msec): min=31, max=349, avg=219.06, stdev=47.99 00:25:57.314 clat percentiles (msec): 00:25:57.314 | 1.00th=[ 32], 5.00th=[ 114], 10.00th=[ 155], 20.00th=[ 199], 00:25:57.314 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 230], 60.00th=[ 236], 00:25:57.314 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 257], 95.00th=[ 262], 00:25:57.314 | 99.00th=[ 317], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:25:57.314 | 99.99th=[ 351] 00:25:57.314 bw ( KiB/s): min= 256, max= 512, per=5.01%, avg=288.80, stdev=61.74, samples=20 00:25:57.314 iops : min= 64, max= 128, avg=72.20, stdev=15.44, samples=20 00:25:57.314 lat (msec) : 50=2.17%, 100=1.90%, 250=68.56%, 500=27.37% 00:25:57.314 cpu : usr=97.81%, sys=1.61%, ctx=75, majf=0, minf=33 00:25:57.314 IO depths : 1=0.3%, 2=2.7%, 4=13.4%, 8=71.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=90.8%, 8=3.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.314 filename0: (groupid=0, jobs=1): err= 0: pid=694347: Fri Jul 12 11:30:21 2024 00:25:57.314 read: IOPS=70, BW=283KiB/s (289kB/s)(2872KiB/10165msec) 00:25:57.314 slat (usec): min=4, max=110, avg=17.26, stdev=19.44 00:25:57.314 clat (msec): min=27, max=344, avg=226.10, stdev=54.92 00:25:57.314 lat (msec): min=27, max=345, avg=226.12, stdev=54.93 00:25:57.314 clat percentiles (msec): 00:25:57.314 | 1.00th=[ 28], 5.00th=[ 144], 10.00th=[ 180], 20.00th=[ 188], 00:25:57.314 | 30.00th=[ 194], 40.00th=[ 226], 50.00th=[ 243], 60.00th=[ 245], 00:25:57.314 | 70.00th=[ 257], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 317], 00:25:57.314 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:25:57.314 | 99.99th=[ 347] 00:25:57.314 bw ( KiB/s): min= 144, max= 384, per=4.87%, avg=280.80, stdev=71.08, samples=20 00:25:57.314 iops : min= 36, max= 96, avg=70.20, stdev=17.77, samples=20 00:25:57.314 lat (msec) : 50=2.23%, 100=1.95%, 250=63.23%, 500=32.59% 00:25:57.314 cpu : usr=97.44%, sys=1.78%, ctx=79, majf=0, minf=44 00:25:57.314 IO depths : 1=0.7%, 2=7.0%, 4=25.1%, 8=55.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.314 filename1: (groupid=0, jobs=1): err= 0: pid=694348: Fri Jul 12 11:30:21 2024 00:25:57.314 read: IOPS=72, BW=292KiB/s (299kB/s)(2968KiB/10172msec) 00:25:57.314 slat (nsec): min=3987, max=60567, avg=10930.65, stdev=4138.07 00:25:57.314 clat (msec): min=34, 
max=350, avg=218.43, stdev=49.85 00:25:57.314 lat (msec): min=34, max=350, avg=218.44, stdev=49.85 00:25:57.314 clat percentiles (msec): 00:25:57.314 | 1.00th=[ 35], 5.00th=[ 114], 10.00th=[ 178], 20.00th=[ 186], 00:25:57.314 | 30.00th=[ 194], 40.00th=[ 220], 50.00th=[ 232], 60.00th=[ 243], 00:25:57.314 | 70.00th=[ 249], 80.00th=[ 257], 90.00th=[ 266], 95.00th=[ 268], 00:25:57.314 | 99.00th=[ 300], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:25:57.314 | 99.99th=[ 351] 00:25:57.314 bw ( KiB/s): min= 256, max= 513, per=5.04%, avg=290.45, stdev=67.51, samples=20 00:25:57.314 iops : min= 64, max= 128, avg=72.60, stdev=16.83, samples=20 00:25:57.314 lat (msec) : 50=2.16%, 100=2.16%, 250=65.77%, 500=29.92% 00:25:57.314 cpu : usr=98.03%, sys=1.46%, ctx=48, majf=0, minf=25 00:25:57.314 IO depths : 1=0.9%, 2=6.6%, 4=23.2%, 8=57.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:57.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.314 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.315 filename1: (groupid=0, jobs=1): err= 0: pid=694349: Fri Jul 12 11:30:21 2024 00:25:57.315 read: IOPS=68, BW=273KiB/s (280kB/s)(2768KiB/10140msec) 00:25:57.315 slat (nsec): min=8021, max=43249, avg=10761.19, stdev=4110.63 00:25:57.315 clat (msec): min=159, max=349, avg=233.14, stdev=32.58 00:25:57.315 lat (msec): min=159, max=349, avg=233.15, stdev=32.58 00:25:57.315 clat percentiles (msec): 00:25:57.315 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 201], 20.00th=[ 213], 00:25:57.315 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 236], 00:25:57.315 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 264], 95.00th=[ 288], 00:25:57.315 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:25:57.315 | 99.99th=[ 351] 00:25:57.315 bw ( KiB/s): min= 176, max= 336, per=4.70%, 
avg=270.40, stdev=34.39, samples=20 00:25:57.315 iops : min= 44, max= 84, avg=67.60, stdev= 8.60, samples=20 00:25:57.315 lat (msec) : 250=75.43%, 500=24.57% 00:25:57.315 cpu : usr=98.29%, sys=1.25%, ctx=30, majf=0, minf=33 00:25:57.315 IO depths : 1=0.6%, 2=1.3%, 4=8.1%, 8=77.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 complete : 0=0.0%, 4=89.2%, 8=5.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 issued rwts: total=692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.315 filename1: (groupid=0, jobs=1): err= 0: pid=694350: Fri Jul 12 11:30:21 2024 00:25:57.315 read: IOPS=61, BW=247KiB/s (253kB/s)(2504KiB/10141msec) 00:25:57.315 slat (nsec): min=8015, max=55303, avg=13502.04, stdev=8110.84 00:25:57.315 clat (msec): min=178, max=480, avg=257.95, stdev=63.19 00:25:57.315 lat (msec): min=178, max=480, avg=257.96, stdev=63.20 00:25:57.315 clat percentiles (msec): 00:25:57.315 | 1.00th=[ 180], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 215], 00:25:57.315 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 255], 00:25:57.315 | 70.00th=[ 268], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 355], 00:25:57.315 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:25:57.315 | 99.99th=[ 481] 00:25:57.315 bw ( KiB/s): min= 128, max= 304, per=4.24%, avg=244.00, stdev=53.16, samples=20 00:25:57.315 iops : min= 32, max= 76, avg=61.00, stdev=13.29, samples=20 00:25:57.315 lat (msec) : 250=55.91%, 500=44.09% 00:25:57.315 cpu : usr=97.51%, sys=1.70%, ctx=155, majf=0, minf=31 00:25:57.315 IO depths : 1=1.6%, 2=5.3%, 4=17.1%, 8=65.0%, 16=11.0%, 32=0.0%, >=64=0.0% 00:25:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:25:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.315 filename1: (groupid=0, jobs=1): err= 0: pid=694351: Fri Jul 12 11:30:21 2024 00:25:57.315 read: IOPS=45, BW=183KiB/s (187kB/s)(1856KiB/10138msec) 00:25:57.315 slat (nsec): min=6740, max=68194, avg=36388.09, stdev=9972.25 00:25:57.315 clat (msec): min=176, max=608, avg=349.28, stdev=59.66 00:25:57.315 lat (msec): min=176, max=608, avg=349.32, stdev=59.66 00:25:57.315 clat percentiles (msec): 00:25:57.315 | 1.00th=[ 251], 5.00th=[ 268], 10.00th=[ 309], 20.00th=[ 330], 00:25:57.315 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 347], 00:25:57.315 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 368], 95.00th=[ 435], 00:25:57.315 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:25:57.315 | 99.99th=[ 609] 00:25:57.315 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=59.29, samples=19 00:25:57.315 iops : min= 32, max= 64, avg=47.16, stdev=14.82, samples=19 00:25:57.315 lat (msec) : 250=0.43%, 500=96.12%, 750=3.45% 00:25:57.315 cpu : usr=97.75%, sys=1.53%, ctx=46, majf=0, minf=29 00:25:57.315 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:25:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.315 filename1: (groupid=0, jobs=1): err= 0: pid=694352: Fri Jul 12 11:30:21 2024 00:25:57.315 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10121msec) 00:25:57.315 slat (nsec): min=12284, max=97600, avg=36867.72, stdev=21173.92 00:25:57.315 clat (msec): min=268, max=591, avg=348.63, stdev=49.51 00:25:57.315 lat (msec): min=268, max=591, avg=348.67, stdev=49.51 00:25:57.315 clat percentiles (msec): 00:25:57.315 | 1.00th=[ 271], 5.00th=[ 309], 10.00th=[ 321], 20.00th=[ 
330], 00:25:57.315 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 347], 00:25:57.315 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 368], 00:25:57.315 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:25:57.315 | 99.99th=[ 592] 00:25:57.315 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=64.13, samples=19 00:25:57.315 iops : min= 32, max= 64, avg=47.16, stdev=16.03, samples=19 00:25:57.315 lat (msec) : 500=96.55%, 750=3.45% 00:25:57.315 cpu : usr=98.53%, sys=1.07%, ctx=14, majf=0, minf=26 00:25:57.315 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:25:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.315 filename1: (groupid=0, jobs=1): err= 0: pid=694353: Fri Jul 12 11:30:21 2024 00:25:57.315 read: IOPS=69, BW=278KiB/s (284kB/s)(2816KiB/10140msec) 00:25:57.315 slat (nsec): min=8011, max=64847, avg=11532.84, stdev=5941.15 00:25:57.315 clat (msec): min=178, max=317, avg=229.74, stdev=29.19 00:25:57.315 lat (msec): min=178, max=317, avg=229.75, stdev=29.19 00:25:57.315 clat percentiles (msec): 00:25:57.315 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 194], 00:25:57.315 | 30.00th=[ 213], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:25:57.315 | 70.00th=[ 251], 80.00th=[ 259], 90.00th=[ 264], 95.00th=[ 266], 00:25:57.315 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 317], 99.95th=[ 317], 00:25:57.315 | 99.99th=[ 317] 00:25:57.315 bw ( KiB/s): min= 240, max= 368, per=4.78%, avg=275.20, stdev=40.74, samples=20 00:25:57.315 iops : min= 60, max= 92, avg=68.80, stdev=10.19, samples=20 00:25:57.315 lat (msec) : 250=69.60%, 500=30.40% 00:25:57.315 cpu : usr=98.24%, sys=1.20%, ctx=25, majf=0, minf=25 
00:25:57.315 IO depths : 1=0.3%, 2=6.5%, 4=25.0%, 8=56.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:57.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.315 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.315 filename1: (groupid=0, jobs=1): err= 0: pid=694354: Fri Jul 12 11:30:21 2024 00:25:57.315 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10127msec) 00:25:57.315 slat (nsec): min=9498, max=90826, avg=34835.23, stdev=16919.12 00:25:57.315 clat (msec): min=250, max=597, avg=348.91, stdev=53.65 00:25:57.315 lat (msec): min=250, max=597, avg=348.95, stdev=53.65 00:25:57.315 clat percentiles (msec): 00:25:57.315 | 1.00th=[ 262], 5.00th=[ 275], 10.00th=[ 321], 20.00th=[ 330], 00:25:57.316 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 347], 00:25:57.316 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 422], 00:25:57.316 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:25:57.316 | 99.99th=[ 600] 00:25:57.316 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=60.94, samples=19 00:25:57.316 iops : min= 32, max= 64, avg=47.16, stdev=15.24, samples=19 00:25:57.316 lat (msec) : 500=96.55%, 750=3.45% 00:25:57.316 cpu : usr=98.24%, sys=1.36%, ctx=17, majf=0, minf=27 00:25:57.316 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:25:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.316 filename1: (groupid=0, jobs=1): err= 0: pid=694355: Fri Jul 12 11:30:21 2024 00:25:57.316 read: IOPS=69, BW=279KiB/s (285kB/s)(2816KiB/10110msec) 
00:25:57.316 slat (nsec): min=7905, max=66689, avg=10768.61, stdev=4778.98 00:25:57.316 clat (msec): min=178, max=267, avg=229.65, stdev=30.15 00:25:57.316 lat (msec): min=178, max=267, avg=229.66, stdev=30.15 00:25:57.316 clat percentiles (msec): 00:25:57.316 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 194], 00:25:57.316 | 30.00th=[ 197], 40.00th=[ 236], 50.00th=[ 241], 60.00th=[ 243], 00:25:57.316 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 264], 95.00th=[ 268], 00:25:57.316 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 268], 00:25:57.316 | 99.99th=[ 268] 00:25:57.316 bw ( KiB/s): min= 256, max= 384, per=4.78%, avg=275.20, stdev=46.89, samples=20 00:25:57.316 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:25:57.316 lat (msec) : 250=68.18%, 500=31.82% 00:25:57.316 cpu : usr=98.29%, sys=1.32%, ctx=15, majf=0, minf=18 00:25:57.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.316 filename2: (groupid=0, jobs=1): err= 0: pid=694356: Fri Jul 12 11:30:21 2024 00:25:57.316 read: IOPS=68, BW=274KiB/s (281kB/s)(2784KiB/10143msec) 00:25:57.316 slat (nsec): min=7905, max=99063, avg=34525.90, stdev=31327.04 00:25:57.316 clat (msec): min=160, max=351, avg=232.18, stdev=29.53 00:25:57.316 lat (msec): min=160, max=351, avg=232.22, stdev=29.53 00:25:57.316 clat percentiles (msec): 00:25:57.316 | 1.00th=[ 171], 5.00th=[ 184], 10.00th=[ 203], 20.00th=[ 213], 00:25:57.316 | 30.00th=[ 218], 40.00th=[ 228], 50.00th=[ 230], 60.00th=[ 236], 00:25:57.316 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 257], 95.00th=[ 275], 00:25:57.316 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 
00:25:57.316 | 99.99th=[ 351] 00:25:57.316 bw ( KiB/s): min= 240, max= 336, per=4.71%, avg=272.00, stdev=26.47, samples=20 00:25:57.316 iops : min= 60, max= 84, avg=68.00, stdev= 6.62, samples=20 00:25:57.316 lat (msec) : 250=77.16%, 500=22.84% 00:25:57.316 cpu : usr=98.50%, sys=1.06%, ctx=19, majf=0, minf=21 00:25:57.316 IO depths : 1=0.1%, 2=1.4%, 4=9.9%, 8=76.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 complete : 0=0.0%, 4=89.8%, 8=4.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 issued rwts: total=696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.316 filename2: (groupid=0, jobs=1): err= 0: pid=694357: Fri Jul 12 11:30:21 2024 00:25:57.316 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10123msec) 00:25:57.316 slat (nsec): min=8080, max=59765, avg=23880.11, stdev=10882.16 00:25:57.316 clat (msec): min=253, max=592, avg=348.80, stdev=56.36 00:25:57.316 lat (msec): min=253, max=592, avg=348.83, stdev=56.36 00:25:57.316 clat percentiles (msec): 00:25:57.316 | 1.00th=[ 255], 5.00th=[ 268], 10.00th=[ 313], 20.00th=[ 330], 00:25:57.316 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 347], 00:25:57.316 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 368], 95.00th=[ 435], 00:25:57.316 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:25:57.316 | 99.99th=[ 592] 00:25:57.316 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=62.56, samples=19 00:25:57.316 iops : min= 32, max= 64, avg=47.16, stdev=15.64, samples=19 00:25:57.316 lat (msec) : 500=96.55%, 750=3.45% 00:25:57.316 cpu : usr=98.45%, sys=1.16%, ctx=14, majf=0, minf=20 00:25:57.316 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:25:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:25:57.316 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.316 filename2: (groupid=0, jobs=1): err= 0: pid=694358: Fri Jul 12 11:30:21 2024 00:25:57.316 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10128msec) 00:25:57.316 slat (nsec): min=9821, max=58656, avg=30212.86, stdev=9875.81 00:25:57.316 clat (msec): min=176, max=597, avg=348.95, stdev=54.77 00:25:57.316 lat (msec): min=176, max=597, avg=348.98, stdev=54.77 00:25:57.316 clat percentiles (msec): 00:25:57.316 | 1.00th=[ 257], 5.00th=[ 275], 10.00th=[ 321], 20.00th=[ 330], 00:25:57.316 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 347], 00:25:57.316 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 430], 00:25:57.316 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:25:57.316 | 99.99th=[ 600] 00:25:57.316 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=64.13, samples=19 00:25:57.316 iops : min= 32, max= 64, avg=47.16, stdev=16.03, samples=19 00:25:57.316 lat (msec) : 250=0.43%, 500=96.12%, 750=3.45% 00:25:57.316 cpu : usr=98.23%, sys=1.38%, ctx=11, majf=0, minf=22 00:25:57.316 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:25:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.316 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.316 filename2: (groupid=0, jobs=1): err= 0: pid=694359: Fri Jul 12 11:30:21 2024 00:25:57.316 read: IOPS=47, BW=190KiB/s (194kB/s)(1920KiB/10131msec) 00:25:57.316 slat (nsec): min=7995, max=88469, avg=19758.75, stdev=13784.92 00:25:57.316 clat (msec): min=193, max=600, avg=337.50, stdev=66.01 00:25:57.316 lat (msec): min=193, max=600, avg=337.52, stdev=66.01 00:25:57.316 clat percentiles 
(msec): 00:25:57.316 | 1.00th=[ 197], 5.00th=[ 245], 10.00th=[ 262], 20.00th=[ 321], 00:25:57.316 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 342], 60.00th=[ 351], 00:25:57.316 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 414], 00:25:57.316 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:25:57.316 | 99.99th=[ 600] 00:25:57.316 bw ( KiB/s): min= 128, max= 256, per=3.39%, avg=195.37, stdev=64.13, samples=19 00:25:57.316 iops : min= 32, max= 64, avg=48.84, stdev=16.03, samples=19 00:25:57.316 lat (msec) : 250=7.08%, 500=88.75%, 750=4.17% 00:25:57.316 cpu : usr=98.42%, sys=1.18%, ctx=15, majf=0, minf=21 00:25:57.316 IO depths : 1=4.4%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:25:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.317 filename2: (groupid=0, jobs=1): err= 0: pid=694360: Fri Jul 12 11:30:21 2024 00:25:57.317 read: IOPS=67, BW=271KiB/s (277kB/s)(2744KiB/10141msec) 00:25:57.317 slat (nsec): min=8059, max=62767, avg=11161.49, stdev=4480.64 00:25:57.317 clat (msec): min=176, max=479, avg=236.18, stdev=51.48 00:25:57.317 lat (msec): min=176, max=479, avg=236.19, stdev=51.49 00:25:57.317 clat percentiles (msec): 00:25:57.317 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:25:57.317 | 30.00th=[ 197], 40.00th=[ 234], 50.00th=[ 243], 60.00th=[ 245], 00:25:57.317 | 70.00th=[ 259], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 268], 00:25:57.317 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481], 00:25:57.317 | 99.99th=[ 481] 00:25:57.317 bw ( KiB/s): min= 128, max= 384, per=4.64%, avg=268.00, stdev=55.64, samples=20 00:25:57.317 iops : min= 32, max= 96, avg=67.00, stdev=13.91, samples=20 00:25:57.317 lat (msec) : 
250=64.72%, 500=35.28% 00:25:57.317 cpu : usr=98.17%, sys=1.43%, ctx=14, majf=0, minf=19 00:25:57.317 IO depths : 1=6.1%, 2=12.4%, 4=25.1%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.317 filename2: (groupid=0, jobs=1): err= 0: pid=694361: Fri Jul 12 11:30:21 2024 00:25:57.317 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10141msec) 00:25:57.317 slat (nsec): min=8032, max=64869, avg=11500.00, stdev=5315.68 00:25:57.317 clat (msec): min=175, max=591, avg=240.18, stdev=51.30 00:25:57.317 lat (msec): min=175, max=591, avg=240.19, stdev=51.30 00:25:57.317 clat percentiles (msec): 00:25:57.317 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 201], 20.00th=[ 211], 00:25:57.317 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 232], 60.00th=[ 236], 00:25:57.317 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 266], 95.00th=[ 342], 00:25:57.317 | 99.00th=[ 481], 99.50th=[ 481], 99.90th=[ 592], 99.95th=[ 592], 00:25:57.317 | 99.99th=[ 592] 00:25:57.317 bw ( KiB/s): min= 112, max= 336, per=4.56%, avg=262.40, stdev=54.05, samples=20 00:25:57.317 iops : min= 28, max= 84, avg=65.60, stdev=13.51, samples=20 00:25:57.317 lat (msec) : 250=72.77%, 500=26.93%, 750=0.30% 00:25:57.317 cpu : usr=98.47%, sys=1.12%, ctx=14, majf=0, minf=26 00:25:57.317 IO depths : 1=0.1%, 2=0.4%, 4=6.7%, 8=80.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 complete : 0=0.0%, 4=88.8%, 8=6.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.317 filename2: (groupid=0, jobs=1): err= 0: 
pid=694362: Fri Jul 12 11:30:21 2024 00:25:57.317 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10123msec) 00:25:57.317 slat (nsec): min=10589, max=54359, avg=25106.62, stdev=10003.12 00:25:57.317 clat (msec): min=177, max=593, avg=348.81, stdev=53.91 00:25:57.317 lat (msec): min=177, max=593, avg=348.84, stdev=53.91 00:25:57.317 clat percentiles (msec): 00:25:57.317 | 1.00th=[ 262], 5.00th=[ 275], 10.00th=[ 321], 20.00th=[ 330], 00:25:57.317 | 30.00th=[ 338], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 347], 00:25:57.317 | 70.00th=[ 351], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 435], 00:25:57.317 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:25:57.317 | 99.99th=[ 592] 00:25:57.317 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=62.56, samples=19 00:25:57.317 iops : min= 32, max= 64, avg=47.16, stdev=15.64, samples=19 00:25:57.317 lat (msec) : 250=0.43%, 500=96.12%, 750=3.45% 00:25:57.317 cpu : usr=98.37%, sys=1.23%, ctx=15, majf=0, minf=28 00:25:57.317 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:25:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.317 filename2: (groupid=0, jobs=1): err= 0: pid=694363: Fri Jul 12 11:30:21 2024 00:25:57.317 read: IOPS=45, BW=183KiB/s (188kB/s)(1856KiB/10122msec) 00:25:57.317 slat (usec): min=7, max=101, avg=25.54, stdev=25.67 00:25:57.317 clat (msec): min=177, max=816, avg=348.77, stdev=83.91 00:25:57.317 lat (msec): min=177, max=816, avg=348.79, stdev=83.91 00:25:57.317 clat percentiles (msec): 00:25:57.317 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 317], 20.00th=[ 326], 00:25:57.317 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 351], 00:25:57.317 | 70.00th=[ 351], 80.00th=[ 359], 
90.00th=[ 368], 95.00th=[ 510], 00:25:57.317 | 99.00th=[ 642], 99.50th=[ 642], 99.90th=[ 818], 99.95th=[ 818], 00:25:57.317 | 99.99th=[ 818] 00:25:57.317 bw ( KiB/s): min= 128, max= 256, per=3.27%, avg=188.63, stdev=57.58, samples=19 00:25:57.317 iops : min= 32, max= 64, avg=47.16, stdev=14.40, samples=19 00:25:57.317 lat (msec) : 250=8.62%, 500=84.91%, 750=6.03%, 1000=0.43% 00:25:57.317 cpu : usr=98.67%, sys=0.92%, ctx=14, majf=0, minf=28 00:25:57.317 IO depths : 1=3.7%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.8%, 32=0.0%, >=64=0.0% 00:25:57.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.317 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:57.317 00:25:57.317 Run status group 0 (all jobs): 00:25:57.317 READ: bw=5750KiB/s (5888kB/s), 183KiB/s-320KiB/s (187kB/s-328kB/s), io=57.1MiB (59.9MB), run=10110-10172msec 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null0 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.317 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 
11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 bdev_null0 00:25:57.318 11:30:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 [2024-07-12 11:30:22.117693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 bdev_null1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.318 { 00:25:57.318 "params": { 00:25:57.318 "name": "Nvme$subsystem", 00:25:57.318 "trtype": "$TEST_TRANSPORT", 00:25:57.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.318 "adrfam": "ipv4", 00:25:57.318 "trsvcid": "$NVMF_PORT", 00:25:57.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.318 "hdgst": ${hdgst:-false}, 00:25:57.318 "ddgst": ${ddgst:-false} 00:25:57.318 }, 00:25:57.318 "method": "bdev_nvme_attach_controller" 00:25:57.318 } 00:25:57.318 EOF 00:25:57.318 )") 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:57.318 
11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.318 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:57.319 { 00:25:57.319 "params": { 00:25:57.319 "name": "Nvme$subsystem", 00:25:57.319 "trtype": "$TEST_TRANSPORT", 00:25:57.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.319 "adrfam": "ipv4", 00:25:57.319 "trsvcid": "$NVMF_PORT", 00:25:57.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.319 "hdgst": ${hdgst:-false}, 00:25:57.319 "ddgst": ${ddgst:-false} 00:25:57.319 }, 00:25:57.319 "method": "bdev_nvme_attach_controller" 00:25:57.319 } 00:25:57.319 EOF 00:25:57.319 )") 00:25:57.319 11:30:22 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:57.319 "params": { 00:25:57.319 "name": "Nvme0", 00:25:57.319 "trtype": "tcp", 00:25:57.319 "traddr": "10.0.0.2", 00:25:57.319 "adrfam": "ipv4", 00:25:57.319 "trsvcid": "4420", 00:25:57.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:57.319 "hdgst": false, 00:25:57.319 "ddgst": false 00:25:57.319 }, 00:25:57.319 "method": "bdev_nvme_attach_controller" 00:25:57.319 },{ 00:25:57.319 "params": { 00:25:57.319 "name": "Nvme1", 00:25:57.319 "trtype": "tcp", 00:25:57.319 "traddr": "10.0.0.2", 00:25:57.319 "adrfam": "ipv4", 00:25:57.319 "trsvcid": "4420", 00:25:57.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.319 "hdgst": false, 00:25:57.319 "ddgst": false 00:25:57.319 }, 00:25:57.319 "method": "bdev_nvme_attach_controller" 00:25:57.319 }' 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:57.319 11:30:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:57.319 11:30:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.319 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:57.319 ... 00:25:57.319 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:57.319 ... 00:25:57.319 fio-3.35 00:25:57.319 Starting 4 threads 00:25:57.319 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.579 00:26:02.579 filename0: (groupid=0, jobs=1): err= 0: pid=695577: Fri Jul 12 11:30:28 2024 00:26:02.579 read: IOPS=1925, BW=15.0MiB/s (15.8MB/s)(75.3MiB/5004msec) 00:26:02.579 slat (nsec): min=4614, max=78751, avg=14118.21, stdev=6501.19 00:26:02.579 clat (usec): min=790, max=8999, avg=4101.87, stdev=452.88 00:26:02.579 lat (usec): min=802, max=9027, avg=4115.99, stdev=453.26 00:26:02.579 clat percentiles (usec): 00:26:02.579 | 1.00th=[ 2606], 5.00th=[ 3523], 10.00th=[ 3752], 20.00th=[ 3982], 00:26:02.579 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:26:02.579 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:26:02.579 | 99.00th=[ 5735], 99.50th=[ 6456], 99.90th=[ 7767], 99.95th=[ 8094], 00:26:02.579 | 99.99th=[ 8979] 00:26:02.579 bw ( KiB/s): min=15104, max=15872, per=25.26%, avg=15400.00, stdev=210.28, samples=10 00:26:02.579 iops : min= 1888, max= 1984, avg=1925.00, 
stdev=26.28, samples=10 00:26:02.579 lat (usec) : 1000=0.05% 00:26:02.579 lat (msec) : 2=0.32%, 4=23.42%, 10=76.21% 00:26:02.579 cpu : usr=93.30%, sys=6.18%, ctx=8, majf=0, minf=0 00:26:02.579 IO depths : 1=1.3%, 2=20.1%, 4=54.0%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 issued rwts: total=9633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.579 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.579 filename0: (groupid=0, jobs=1): err= 0: pid=695578: Fri Jul 12 11:30:28 2024 00:26:02.579 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:26:02.579 slat (nsec): min=4461, max=61737, avg=15049.41, stdev=7019.47 00:26:02.579 clat (usec): min=743, max=7986, avg=4194.94, stdev=621.68 00:26:02.579 lat (usec): min=755, max=8008, avg=4209.98, stdev=621.31 00:26:02.579 clat percentiles (usec): 00:26:02.579 | 1.00th=[ 1876], 5.00th=[ 3621], 10.00th=[ 3916], 20.00th=[ 4015], 00:26:02.579 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:26:02.579 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 5211], 00:26:02.579 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 7504], 99.95th=[ 7570], 00:26:02.579 | 99.99th=[ 7963] 00:26:02.579 bw ( KiB/s): min=14656, max=15392, per=24.63%, avg=15016.89, stdev=215.27, samples=9 00:26:02.579 iops : min= 1832, max= 1924, avg=1877.11, stdev=26.91, samples=9 00:26:02.579 lat (usec) : 750=0.01%, 1000=0.19% 00:26:02.579 lat (msec) : 2=0.95%, 4=15.25%, 10=83.60% 00:26:02.579 cpu : usr=93.28%, sys=6.24%, ctx=7, majf=0, minf=9 00:26:02.579 IO depths : 1=0.8%, 2=19.2%, 4=54.2%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 issued rwts: 
total=9407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.579 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.579 filename1: (groupid=0, jobs=1): err= 0: pid=695579: Fri Jul 12 11:30:28 2024 00:26:02.579 read: IOPS=1911, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5003msec) 00:26:02.579 slat (nsec): min=4540, max=82932, avg=12484.18, stdev=5636.27 00:26:02.579 clat (usec): min=880, max=7355, avg=4144.25, stdev=445.13 00:26:02.579 lat (usec): min=898, max=7371, avg=4156.73, stdev=445.22 00:26:02.579 clat percentiles (usec): 00:26:02.579 | 1.00th=[ 2933], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 3982], 00:26:02.579 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:26:02.579 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4621], 00:26:02.579 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7242], 99.95th=[ 7308], 00:26:02.579 | 99.99th=[ 7373] 00:26:02.579 bw ( KiB/s): min=14944, max=15536, per=25.08%, avg=15287.80, stdev=224.82, samples=10 00:26:02.579 iops : min= 1868, max= 1942, avg=1910.90, stdev=28.16, samples=10 00:26:02.579 lat (usec) : 1000=0.01% 00:26:02.579 lat (msec) : 2=0.24%, 4=20.22%, 10=79.53% 00:26:02.579 cpu : usr=93.06%, sys=6.46%, ctx=14, majf=0, minf=0 00:26:02.579 IO depths : 1=0.7%, 2=11.0%, 4=61.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 issued rwts: total=9561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.579 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.579 filename1: (groupid=0, jobs=1): err= 0: pid=695580: Fri Jul 12 11:30:28 2024 00:26:02.579 read: IOPS=1905, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5001msec) 00:26:02.579 slat (usec): min=4, max=103, avg=15.48, stdev= 6.61 00:26:02.579 clat (usec): min=683, max=7649, avg=4138.13, stdev=566.25 00:26:02.579 lat (usec): min=695, max=7662, avg=4153.62, 
stdev=566.27 00:26:02.579 clat percentiles (usec): 00:26:02.579 | 1.00th=[ 1876], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 4015], 00:26:02.579 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4146], 00:26:02.579 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4817], 00:26:02.579 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 7504], 00:26:02.579 | 99.99th=[ 7635] 00:26:02.579 bw ( KiB/s): min=14816, max=15504, per=24.99%, avg=15233.56, stdev=200.53, samples=9 00:26:02.579 iops : min= 1852, max= 1938, avg=1904.11, stdev=25.06, samples=9 00:26:02.579 lat (usec) : 750=0.02%, 1000=0.10% 00:26:02.579 lat (msec) : 2=0.94%, 4=18.98%, 10=79.95% 00:26:02.579 cpu : usr=91.74%, sys=6.80%, ctx=56, majf=0, minf=9 00:26:02.579 IO depths : 1=0.7%, 2=20.8%, 4=53.4%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.579 issued rwts: total=9527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.579 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:02.579 00:26:02.579 Run status group 0 (all jobs): 00:26:02.579 READ: bw=59.5MiB/s (62.4MB/s), 14.7MiB/s-15.0MiB/s (15.4MB/s-15.8MB/s), io=298MiB (312MB), run=5001-5004msec 00:26:02.579 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:02.579 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:02.579 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 00:26:02.580 real 0m24.317s 00:26:02.580 user 4m36.739s 00:26:02.580 sys 0m6.196s 00:26:02.580 11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.580 
11:30:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 ************************************ 00:26:02.580 END TEST fio_dif_rand_params 00:26:02.580 ************************************ 00:26:02.580 11:30:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:02.580 11:30:28 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:02.580 11:30:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:02.580 11:30:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 ************************************ 00:26:02.580 START TEST fio_dif_digest 00:26:02.580 ************************************ 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- 
# for sub in "$@" 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 bdev_null0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.580 [2024-07-12 11:30:28.549788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:02.580 { 00:26:02.580 "params": { 00:26:02.580 "name": "Nvme$subsystem", 00:26:02.580 "trtype": "$TEST_TRANSPORT", 00:26:02.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.580 "adrfam": "ipv4", 00:26:02.580 "trsvcid": "$NVMF_PORT", 00:26:02.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.580 "hdgst": ${hdgst:-false}, 00:26:02.580 "ddgst": ${ddgst:-false} 00:26:02.580 }, 00:26:02.580 "method": "bdev_nvme_attach_controller" 00:26:02.580 } 00:26:02.580 EOF 00:26:02.580 )") 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:02.580 "params": { 00:26:02.580 "name": "Nvme0", 00:26:02.580 "trtype": "tcp", 00:26:02.580 "traddr": "10.0.0.2", 00:26:02.580 "adrfam": "ipv4", 00:26:02.580 "trsvcid": "4420", 00:26:02.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:02.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:02.580 "hdgst": true, 00:26:02.580 "ddgst": true 00:26:02.580 }, 00:26:02.580 "method": "bdev_nvme_attach_controller" 00:26:02.580 }' 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:02.580 11:30:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:02.839 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:02.839 ... 
00:26:02.839 fio-3.35 00:26:02.839 Starting 3 threads 00:26:02.839 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.074 00:26:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=696418: Fri Jul 12 11:30:39 2024 00:26:15.074 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(250MiB/10046msec) 00:26:15.074 slat (nsec): min=7425, max=94297, avg=17598.13, stdev=6191.58 00:26:15.074 clat (usec): min=9825, max=51095, avg=15055.33, stdev=1495.55 00:26:15.074 lat (usec): min=9837, max=51108, avg=15072.93, stdev=1495.52 00:26:15.074 clat percentiles (usec): 00:26:15.074 | 1.00th=[12649], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:26:15.074 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:26:15.074 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16188], 95.00th=[16712], 00:26:15.074 | 99.00th=[17433], 99.50th=[17695], 99.90th=[49021], 99.95th=[51119], 00:26:15.074 | 99.99th=[51119] 00:26:15.074 bw ( KiB/s): min=25088, max=26112, per=32.69%, avg=25523.20, stdev=322.75, samples=20 00:26:15.074 iops : min= 196, max= 204, avg=199.40, stdev= 2.52, samples=20 00:26:15.074 lat (msec) : 10=0.25%, 20=99.65%, 50=0.05%, 100=0.05% 00:26:15.074 cpu : usr=91.41%, sys=7.10%, ctx=268, majf=0, minf=180 00:26:15.074 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.074 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.074 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=696419: Fri Jul 12 11:30:39 2024 00:26:15.074 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(255MiB/10048msec) 00:26:15.074 slat (nsec): min=7110, max=40488, avg=14954.14, stdev=4287.49 00:26:15.074 clat (usec): min=11063, max=55791, avg=14764.59, stdev=2233.86 00:26:15.074 lat (usec): min=11076, max=55814, avg=14779.54, 
stdev=2233.85 00:26:15.074 clat percentiles (usec): 00:26:15.074 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:26:15.074 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:26:15.074 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:26:15.074 | 99.00th=[17433], 99.50th=[18220], 99.90th=[55837], 99.95th=[55837], 00:26:15.074 | 99.99th=[55837] 00:26:15.074 bw ( KiB/s): min=23552, max=27136, per=33.35%, avg=26035.20, stdev=792.75, samples=20 00:26:15.074 iops : min= 184, max= 212, avg=203.40, stdev= 6.19, samples=20 00:26:15.074 lat (msec) : 20=99.61%, 50=0.20%, 100=0.20% 00:26:15.074 cpu : usr=92.89%, sys=6.64%, ctx=20, majf=0, minf=162 00:26:15.074 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.074 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.074 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.074 filename0: (groupid=0, jobs=1): err= 0: pid=696420: Fri Jul 12 11:30:39 2024 00:26:15.075 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(262MiB/10048msec) 00:26:15.075 slat (nsec): min=7427, max=44993, avg=14586.31, stdev=4068.75 00:26:15.075 clat (usec): min=9675, max=54694, avg=14337.16, stdev=1542.61 00:26:15.075 lat (usec): min=9693, max=54705, avg=14351.75, stdev=1542.57 00:26:15.075 clat percentiles (usec): 00:26:15.075 | 1.00th=[11863], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:26:15.075 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:26:15.075 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:26:15.075 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17957], 99.95th=[51643], 00:26:15.075 | 99.99th=[54789] 00:26:15.075 bw ( KiB/s): min=26368, max=27446, per=34.33%, avg=26805.90, stdev=348.51, samples=20 
00:26:15.075 iops : min= 206, max= 214, avg=209.40, stdev= 2.68, samples=20 00:26:15.075 lat (msec) : 10=0.38%, 20=99.52%, 100=0.10% 00:26:15.075 cpu : usr=93.16%, sys=6.32%, ctx=18, majf=0, minf=111 00:26:15.075 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.075 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.075 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:15.075 00:26:15.075 Run status group 0 (all jobs): 00:26:15.075 READ: bw=76.2MiB/s (79.9MB/s), 24.8MiB/s-26.1MiB/s (26.0MB/s-27.4MB/s), io=766MiB (803MB), run=10046-10048msec 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.075 00:26:15.075 real 0m11.281s 00:26:15.075 user 0m29.085s 00:26:15.075 sys 0m2.311s 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:15.075 11:30:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.075 ************************************ 00:26:15.075 END TEST fio_dif_digest 00:26:15.075 ************************************ 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:15.075 11:30:39 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:15.075 11:30:39 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:15.075 rmmod nvme_tcp 00:26:15.075 rmmod nvme_fabrics 00:26:15.075 rmmod nvme_keyring 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 689968 ']' 00:26:15.075 11:30:39 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 689968 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 689968 ']' 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 689968 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 689968 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 689968' 00:26:15.075 killing process with pid 689968 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@967 -- # kill 689968 00:26:15.075 11:30:39 nvmf_dif -- common/autotest_common.sh@972 -- # wait 689968 00:26:15.075 11:30:40 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:15.075 11:30:40 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:15.334 Waiting for block devices as requested 00:26:15.334 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:15.334 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:15.592 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:15.592 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:15.851 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:15.851 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:15.851 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:15.851 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:16.108 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:16.108 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:16.108 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:16.108 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:16.367 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:16.367 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:16.367 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:16.367 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:16.624 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:16.624 11:30:42 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:16.624 11:30:42 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:16.624 11:30:42 nvmf_dif -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.624 11:30:42 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:16.624 11:30:42 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.624 11:30:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:16.624 11:30:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.159 11:30:44 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:19.159 00:26:19.159 real 1m7.104s 00:26:19.159 user 6m33.850s 00:26:19.159 sys 0m17.679s 00:26:19.159 11:30:44 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:19.159 11:30:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:19.159 ************************************ 00:26:19.159 END TEST nvmf_dif 00:26:19.159 ************************************ 00:26:19.159 11:30:44 -- common/autotest_common.sh@1142 -- # return 0 00:26:19.159 11:30:44 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:19.159 11:30:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:19.159 11:30:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.159 11:30:44 -- common/autotest_common.sh@10 -- # set +x 00:26:19.159 ************************************ 00:26:19.159 START TEST nvmf_abort_qd_sizes 00:26:19.159 ************************************ 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:19.159 * Looking for test storage... 
00:26:19.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:19.159 11:30:44 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:19.159 11:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:21.064 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:21.064 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:26:21.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:21.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.064 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:21.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:21.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:26:21.065 00:26:21.065 --- 10.0.0.2 ping statistics --- 00:26:21.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.065 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:21.065 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:21.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:26:21.065 00:26:21.065 --- 10.0.0.1 ping statistics --- 00:26:21.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.065 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:26:21.065 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.065 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:21.065 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:21.065 11:30:46 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:22.001 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:22.001 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:22.001 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:22.001 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:22.001 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:22.259 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:22.259 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:22.259 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:22.259 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:26:22.259 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:23.197 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=701125 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 701125 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 701125 ']' 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:23.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.197 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:23.197 [2024-07-12 11:30:49.301189] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:26:23.197 [2024-07-12 11:30:49.301292] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.455 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.455 [2024-07-12 11:30:49.366730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.455 [2024-07-12 11:30:49.469030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.455 [2024-07-12 11:30:49.469083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.455 [2024-07-12 11:30:49.469107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.455 [2024-07-12 11:30:49.469118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.455 [2024-07-12 11:30:49.469128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
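The TCP init sequence traced earlier (nvmf/common.sh@244-264) splits the two ice ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A minimal dry-runnable sketch of that sequence, using the interface names from this run (DRY_RUN is an illustration-only wrapper, not part of the SPDK scripts):

```shell
# Dry-runnable sketch of the nvmf_tcp_init sequence shown in the log above.
# DRY_RUN defaults to 1 so commands are printed, not executed; clear it
# (DRY_RUN=) on a real host with root and both ice ports present.
: "${DRY_RUN:=1}"
run() { ${DRY_RUN:+echo} "$@"; }

TARGET_IF=cvl_0_0       # moved into the namespace, gets 10.0.0.2 (target)
INITIATOR_IF=cvl_0_1    # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The cross-namespace pings at 11:30:46 (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) are the sanity check that this plumbing worked before nvmf_tgt is launched under `ip netns exec`.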
00:26:23.455 [2024-07-12 11:30:49.469272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.455 [2024-07-12 11:30:49.469335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.455 [2024-07-12 11:30:49.469445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.455 [2024-07-12 11:30:49.469443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.712 11:30:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:23.712 ************************************ 00:26:23.712 START TEST spdk_target_abort 00:26:23.712 ************************************ 00:26:23.712 11:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:23.712 11:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:23.712 11:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:23.712 11:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.712 11:30:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.986 spdk_targetn1 00:26:26.986 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.986 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.986 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.986 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.986 [2024-07-12 11:30:52.468640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:26.987 [2024-07-12 11:30:52.500925] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:26.987 11:30:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:26.987 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.259 Initializing NVMe Controllers 00:26:30.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:30.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:30.259 Initialization complete. Launching workers. 
00:26:30.259 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11445, failed: 0 00:26:30.259 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 10184 00:26:30.259 success 747, unsuccess 514, failed 0 00:26:30.259 11:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:30.259 11:30:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:30.259 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.529 Initializing NVMe Controllers 00:26:33.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:33.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:33.529 Initialization complete. Launching workers. 
00:26:33.529 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8444, failed: 0 00:26:33.529 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1262, failed to submit 7182 00:26:33.529 success 344, unsuccess 918, failed 0 00:26:33.529 11:30:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:33.530 11:30:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:33.530 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.085 Initializing NVMe Controllers 00:26:36.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:36.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:36.085 Initialization complete. Launching workers. 
00:26:36.085 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31199, failed: 0 00:26:36.085 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2591, failed to submit 28608 00:26:36.085 success 535, unsuccess 2056, failed 0 00:26:36.085 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:36.085 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.085 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:36.085 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.086 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:36.086 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.086 11:31:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 701125 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 701125 ']' 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 701125 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701125 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701125' 00:26:37.458 killing process with pid 701125 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 701125 00:26:37.458 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 701125 00:26:37.716 00:26:37.716 real 0m14.151s 00:26:37.716 user 0m53.284s 00:26:37.716 sys 0m2.647s 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:37.716 ************************************ 00:26:37.716 END TEST spdk_target_abort 00:26:37.716 ************************************ 00:26:37.716 11:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:26:37.716 11:31:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:37.716 11:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:37.716 11:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.716 11:31:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:37.716 ************************************ 00:26:37.716 START TEST kernel_target_abort 00:26:37.716 ************************************ 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 
00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:37.716 11:31:03 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:37.716 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:37.717 11:31:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:39.091 Waiting for block devices as requested 00:26:39.091 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:39.091 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:39.091 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:39.350 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:39.350 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:39.350 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:39.350 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:39.608 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:39.608 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:39.608 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:39.608 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:39.866 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:39.866 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:39.866 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:40.124 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:40.124 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:40.124 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:40.384 No valid GPT data, bailing 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:40.384 11:31:06 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:40.384 00:26:40.384 Discovery Log Number of Records 2, Generation counter 2 00:26:40.384 =====Discovery Log Entry 0====== 00:26:40.384 trtype: tcp 00:26:40.384 adrfam: ipv4 00:26:40.384 subtype: current discovery subsystem 00:26:40.384 treq: not specified, sq flow control disable supported 00:26:40.384 portid: 1 00:26:40.384 trsvcid: 4420 00:26:40.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:40.384 traddr: 10.0.0.1 00:26:40.384 eflags: none 00:26:40.384 sectype: none 00:26:40.384 =====Discovery Log Entry 1====== 00:26:40.384 trtype: tcp 00:26:40.384 adrfam: ipv4 00:26:40.384 subtype: nvme subsystem 00:26:40.384 treq: not specified, sq flow control disable supported 00:26:40.384 portid: 1 00:26:40.384 trsvcid: 4420 00:26:40.384 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:40.384 traddr: 10.0.0.1 00:26:40.384 eflags: none 00:26:40.384 
sectype: none 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:40.384 11:31:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:40.384 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.665 Initializing NVMe Controllers 00:26:43.665 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:43.665 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:43.665 Initialization complete. Launching workers. 
00:26:43.665 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55106, failed: 0 00:26:43.665 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55106, failed to submit 0 00:26:43.665 success 0, unsuccess 55106, failed 0 00:26:43.665 11:31:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:43.665 11:31:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:43.665 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.948 Initializing NVMe Controllers 00:26:46.948 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:46.948 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:46.948 Initialization complete. Launching workers. 
00:26:46.948 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98569, failed: 0 00:26:46.948 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24874, failed to submit 73695 00:26:46.948 success 0, unsuccess 24874, failed 0 00:26:46.948 11:31:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.948 11:31:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.948 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.225 Initializing NVMe Controllers 00:26:50.225 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:50.225 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:50.225 Initialization complete. Launching workers. 
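The three `abort` runs in this test come from a loop over the queue depths declared earlier as `qds=(4 24 64)`, each invoking the same example binary against the assembled target string. A sketch of that driver loop, wrapped in a function so it can be exercised with a stub — `ABORT_BIN` and `TARGET` here are placeholders standing in for this run's build path and connection string:

```shell
#!/usr/bin/env bash
# Sketch of the abort_qd_sizes queue-depth sweep: run the SPDK abort example
# once per queue depth against the kernel target. ABORT_BIN and TARGET are
# assumptions standing in for the jenkins build path and this run's address.
ABORT_BIN=${ABORT_BIN:-./build/examples/abort}
TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

run_abort_sweep() {
    local qd
    for qd in 4 24 64; do
        # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O, as in the trace.
        "$ABORT_BIN" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
    done
}
```

At `-q 4` every abort fits the controller's abort command limit, so all 55106 submitted aborts complete; at the deeper queues most outstanding I/Os cannot get an abort slot, which is the "failed to submit" count in the results above.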
00:26:50.225 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95661, failed: 0 00:26:50.225 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23890, failed to submit 71771 00:26:50.225 success 0, unsuccess 23890, failed 0 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:50.225 11:31:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:51.162 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:51.162 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:51.162 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:51.162 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:51.162 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:51.162 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:51.162 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:51.162 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:51.162 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:52.099 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:52.099 00:26:52.099 real 0m14.370s 00:26:52.099 user 0m6.485s 00:26:52.099 sys 0m3.264s 00:26:52.099 11:31:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:52.099 11:31:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:52.099 ************************************ 00:26:52.099 END TEST kernel_target_abort 00:26:52.099 ************************************ 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.099 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.099 rmmod nvme_tcp 00:26:52.099 rmmod nvme_fabrics 
00:26:52.099 rmmod nvme_keyring 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 701125 ']' 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 701125 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 701125 ']' 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 701125 00:26:52.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (701125) - No such process 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 701125 is not found' 00:26:52.359 Process with pid 701125 is not found 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:52.359 11:31:18 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:53.306 Waiting for block devices as requested 00:26:53.306 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:53.563 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:53.563 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:53.846 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:53.846 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:53.846 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:53.846 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:54.106 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:54.106 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:54.106 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:54.365 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:54.365 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:54.365 0000:80:04.4 (8086 0e24): vfio-pci 
-> ioatdma 00:26:54.365 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:54.624 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:54.624 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:54.624 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:54.882 11:31:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.786 11:31:22 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.786 00:26:56.786 real 0m38.136s 00:26:56.786 user 1m1.916s 00:26:56.786 sys 0m9.353s 00:26:56.786 11:31:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:56.786 11:31:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:56.787 ************************************ 00:26:56.787 END TEST nvmf_abort_qd_sizes 00:26:56.787 ************************************ 00:26:56.787 11:31:22 -- common/autotest_common.sh@1142 -- # return 0 00:26:56.787 11:31:22 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:56.787 11:31:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:56.787 11:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.787 11:31:22 -- common/autotest_common.sh@10 -- # set +x 00:26:56.787 ************************************ 00:26:56.787 START TEST keyring_file 00:26:56.787 
************************************ 00:26:56.787 11:31:22 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:57.046 * Looking for test storage... 00:26:57.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.046 
11:31:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.046 11:31:22 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.046 11:31:22 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.046 11:31:22 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.046 11:31:22 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.046 11:31:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.046 11:31:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.046 11:31:22 
keyring_file -- paths/export.sh@5 -- # export PATH 00:26:57.046 11:31:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@47 -- # : 0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lpU2UkMhwn 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lpU2UkMhwn 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lpU2UkMhwn 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lpU2UkMhwn 00:26:57.046 11:31:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:57.046 11:31:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hskhb7w2k6 00:26:57.046 11:31:22 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:57.046 11:31:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:57.046 11:31:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hskhb7w2k6 00:26:57.046 11:31:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hskhb7w2k6 00:26:57.046 11:31:23 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.hskhb7w2k6 00:26:57.046 11:31:23 keyring_file -- keyring/file.sh@30 -- # tgtpid=706735 00:26:57.046 11:31:23 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:57.046 11:31:23 keyring_file -- keyring/file.sh@32 -- # waitforlisten 706735 00:26:57.046 11:31:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 706735 ']' 00:26:57.046 11:31:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.046 11:31:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.046 11:31:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
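`waitforlisten`, invoked right after `spdk_tgt` is launched below, blocks until the target exposes its RPC socket at `/var/tmp/spdk.sock`. A simplified version of that wait, sketched with a hypothetical `wait_for_socket` name — unlike the real helper it only checks for the socket file, not that the process answers RPCs:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, as waitforlisten does for
# /var/tmp/spdk.sock. wait_for_socket is a simplified stand-in: it does not
# verify the owning process is alive or that RPCs are served yet.
wait_for_socket() {
    local path=$1 timeout=${2:-10} i
    # Check ten times per second until the timeout (in seconds) elapses.
    for ((i = 0; i < timeout * 10; i++)); do
        [[ -S $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

The real `waitforlisten` additionally retries an RPC (`max_retries=100` above) so a socket left over from a dead process does not count as ready.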
00:26:57.047 11:31:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.047 11:31:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:57.047 [2024-07-12 11:31:23.076686] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 00:26:57.047 [2024-07-12 11:31:23.076780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706735 ] 00:26:57.047 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.047 [2024-07-12 11:31:23.134272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.308 [2024-07-12 11:31:23.242177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:26:57.619 11:31:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 [2024-07-12 11:31:23.472977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.619 null0 00:26:57.619 [2024-07-12 11:31:23.505019] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:57.619 [2024-07-12 11:31:23.505464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:57.619 [2024-07-12 11:31:23.513027] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.619 11:31:23 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 [2024-07-12 11:31:23.521042] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:57.619 request: 00:26:57.619 { 00:26:57.619 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:57.619 "secure_channel": false, 00:26:57.619 "listen_address": { 00:26:57.619 "trtype": "tcp", 00:26:57.619 "traddr": "127.0.0.1", 00:26:57.619 "trsvcid": "4420" 00:26:57.619 }, 00:26:57.619 "method": "nvmf_subsystem_add_listener", 00:26:57.619 "req_id": 1 00:26:57.619 } 00:26:57.619 Got JSON-RPC error response 00:26:57.619 response: 00:26:57.619 { 00:26:57.619 "code": -32602, 00:26:57.619 "message": "Invalid parameters" 00:26:57.619 } 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:57.619 11:31:23 keyring_file -- keyring/file.sh@46 -- # bperfpid=706746 00:26:57.619 11:31:23 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:57.619 11:31:23 keyring_file -- keyring/file.sh@48 -- # waitforlisten 706746 /var/tmp/bperf.sock 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 706746 ']' 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:57.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.619 11:31:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:57.619 [2024-07-12 11:31:23.565602] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
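The two keys registered below with `keyring_file_add_key` were prepared by `prep_key`, which writes the formatted secret to a `mktemp` path and restricts its permissions, as in the `chmod 0600` steps above. A simplified sketch — note the interchange formatting done by `format_interchange_psk` (the inline `python -` step in the trace) is deliberately elided here, so writing the raw hex key verbatim is an assumption of this sketch, not SPDK's actual key format:

```shell
#!/usr/bin/env bash
# Simplified prep_key: write a secret to a private temp file so it can be
# registered with keyring_file_add_key. Unlike SPDK's helper, this writes the
# hex key verbatim instead of the NVMe TLS PSK interchange format.
prep_key_file() {
    local key=$1 path
    path=$(mktemp) || return 1
    printf '%s\n' "$key" > "$path"
    chmod 0600 "$path"    # restrict permissions, as the test helper does
    printf '%s\n' "$path"
}
```

The function prints the generated path, mirroring how the trace captures `path=/tmp/tmp.lpU2UkMhwn` for later `keyring_file_add_key key0 <path>` calls.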
00:26:57.619 [2024-07-12 11:31:23.565665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706746 ] 00:26:57.619 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.619 [2024-07-12 11:31:23.621984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.877 [2024-07-12 11:31:23.729485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.877 11:31:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.877 11:31:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:26:57.877 11:31:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:26:57.877 11:31:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:26:58.135 11:31:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hskhb7w2k6 00:26:58.135 11:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hskhb7w2k6 00:26:58.392 11:31:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:26:58.392 11:31:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:26:58.392 11:31:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:58.392 11:31:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:58.392 11:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:58.649 11:31:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.lpU2UkMhwn == 
\/\t\m\p\/\t\m\p\.\l\p\U\2\U\k\M\h\w\n ]] 00:26:58.649 11:31:24 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:26:58.649 11:31:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:58.649 11:31:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:58.649 11:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:58.649 11:31:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:58.907 11:31:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.hskhb7w2k6 == \/\t\m\p\/\t\m\p\.\h\s\k\h\b\7\w\2\k\6 ]] 00:26:58.907 11:31:24 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:26:58.907 11:31:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:58.907 11:31:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:58.907 11:31:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:58.907 11:31:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:58.907 11:31:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:59.164 11:31:25 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:59.164 11:31:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:26:59.164 11:31:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:59.164 11:31:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:59.164 11:31:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.164 11:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.164 11:31:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:59.421 11:31:25 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:26:59.421 11:31:25 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:59.421 11:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:59.421 [2024-07-12 11:31:25.534583] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:59.679 nvme0n1 00:26:59.679 11:31:25 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:26:59.679 11:31:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:59.679 11:31:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:59.679 11:31:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.679 11:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.679 11:31:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:59.936 11:31:25 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:59.936 11:31:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:26:59.936 11:31:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:59.936 11:31:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:59.936 11:31:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:59.936 11:31:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:59.936 11:31:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:00.193 11:31:26 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 ))
00:27:00.193 11:31:26 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:00.193 Running I/O for 1 seconds...
00:27:01.127
00:27:01.127 Latency(us)
00:27:01.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:01.127 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:27:01.127 nvme0n1 : 1.01 9771.65 38.17 0.00 0.00 13050.89 6747.78 24175.50
00:27:01.127 ===================================================================================================================
00:27:01.127 Total : 9771.65 38.17 0.00 0.00 13050.89 6747.78 24175.50
00:27:01.127 0
00:27:01.127 11:31:27 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:27:01.127 11:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:27:01.386 11:31:27 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:27:01.386 11:31:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:27:01.386 11:31:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:01.386 11:31:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:01.386 11:31:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:27:01.386 11:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:01.644 11:31:27 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:27:01.644 11:31:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:27:01.644 11:31:27 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:27:01.644 11:31:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:01.644 11:31:27
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.644 11:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.644 11:31:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:01.902 11:31:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:01.902 11:31:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.902 11:31:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:01.902 11:31:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:02.159 [2024-07-12 11:31:28.211805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:27:02.159 [2024-07-12 11:31:28.212044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8649a0 (107): Transport endpoint is not connected
00:27:02.159 [2024-07-12 11:31:28.213037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8649a0 (9): Bad file descriptor
00:27:02.159 [2024-07-12 11:31:28.214035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:27:02.159 [2024-07-12 11:31:28.214054] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:27:02.159 [2024-07-12 11:31:28.214067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:27:02.159 request:
00:27:02.159 {
00:27:02.159 "name": "nvme0",
00:27:02.159 "trtype": "tcp",
00:27:02.159 "traddr": "127.0.0.1",
00:27:02.159 "adrfam": "ipv4",
00:27:02.159 "trsvcid": "4420",
00:27:02.159 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:02.159 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:02.159 "prchk_reftag": false,
00:27:02.159 "prchk_guard": false,
00:27:02.159 "hdgst": false,
00:27:02.159 "ddgst": false,
00:27:02.159 "psk": "key1",
00:27:02.159 "method": "bdev_nvme_attach_controller",
00:27:02.159 "req_id": 1
00:27:02.159 }
00:27:02.160 Got JSON-RPC error response
00:27:02.160 response:
00:27:02.160 {
00:27:02.160 "code": -5,
00:27:02.160 "message": "Input/output error"
00:27:02.160 }
00:27:02.160 11:31:28 keyring_file -- common/autotest_common.sh@651 -- # es=1
00:27:02.160 11:31:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:02.160 11:31:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:02.160 11:31:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:02.160 11:31:28 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:27:02.160
11:31:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:02.160 11:31:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:02.160 11:31:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:02.160 11:31:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:02.160 11:31:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:02.417 11:31:28 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:02.417 11:31:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:02.417 11:31:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:02.417 11:31:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:02.417 11:31:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:02.417 11:31:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:02.417 11:31:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:02.675 11:31:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:02.675 11:31:28 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:02.675 11:31:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:02.933 11:31:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:02.933 11:31:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:03.191 11:31:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:03.191 11:31:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.191 11:31:29 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:03.449 11:31:29 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:03.449 11:31:29 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.lpU2UkMhwn 00:27:03.449 11:31:29 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.449 11:31:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:27:03.449 11:31:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:27:03.707 [2024-07-12 11:31:29.698168] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lpU2UkMhwn': 0100660 00:27:03.707 [2024-07-12 11:31:29.698200] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:03.707 request: 00:27:03.707 { 00:27:03.707 "name": "key0", 00:27:03.707 "path": "/tmp/tmp.lpU2UkMhwn", 00:27:03.707 "method": "keyring_file_add_key", 00:27:03.707 "req_id": 1 00:27:03.707 } 00:27:03.707 Got JSON-RPC error response 00:27:03.707 response: 00:27:03.707 { 00:27:03.707 "code": -1, 
00:27:03.707 "message": "Operation not permitted" 00:27:03.707 } 00:27:03.707 11:31:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:03.707 11:31:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:03.707 11:31:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:03.707 11:31:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:03.707 11:31:29 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.lpU2UkMhwn 00:27:03.707 11:31:29 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:27:03.707 11:31:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lpU2UkMhwn 00:27:03.964 11:31:29 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.lpU2UkMhwn 00:27:03.964 11:31:29 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:03.964 11:31:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:03.964 11:31:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:03.964 11:31:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.964 11:31:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:03.964 11:31:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.221 11:31:30 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:04.221 11:31:30 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:04.221 11:31:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:04.221 11:31:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:04.479 [2024-07-12 11:31:30.448214] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lpU2UkMhwn': No such file or directory 00:27:04.479 [2024-07-12 11:31:30.448251] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:04.479 [2024-07-12 11:31:30.448293] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:04.479 [2024-07-12 11:31:30.448304] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:04.479 [2024-07-12 11:31:30.448315] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:04.479 request: 00:27:04.479 { 00:27:04.479 "name": "nvme0", 00:27:04.479 "trtype": "tcp", 00:27:04.479 "traddr": "127.0.0.1", 00:27:04.479 "adrfam": "ipv4", 00:27:04.479 "trsvcid": "4420", 00:27:04.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:04.479 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:04.479 "prchk_reftag": false, 00:27:04.479 "prchk_guard": false, 00:27:04.479 "hdgst": false, 00:27:04.479 "ddgst": false, 00:27:04.479 "psk": "key0", 00:27:04.479 "method": "bdev_nvme_attach_controller", 00:27:04.479 "req_id": 1 00:27:04.479 } 00:27:04.479 Got JSON-RPC error response 00:27:04.479 response: 00:27:04.479 { 00:27:04.479 "code": -19, 00:27:04.479 "message": "No such device" 00:27:04.479 } 00:27:04.479 11:31:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:04.479 11:31:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:04.479 11:31:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:04.479 11:31:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:04.479 11:31:30 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:04.479 11:31:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:04.737 11:31:30 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HUdp4TQ6j3 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:04.737 11:31:30 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:04.737 11:31:30 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:27:04.737 11:31:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:04.737 11:31:30 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:04.737 11:31:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:04.737 11:31:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HUdp4TQ6j3 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HUdp4TQ6j3 00:27:04.737 11:31:30 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.HUdp4TQ6j3 00:27:04.737 11:31:30 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HUdp4TQ6j3 00:27:04.737 11:31:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HUdp4TQ6j3 00:27:04.995 11:31:31 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:04.995 11:31:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.253 nvme0n1 00:27:05.253 11:31:31 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:05.253 11:31:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:05.253 11:31:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:05.253 11:31:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.253 11:31:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:05.253 11:31:31 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.510 11:31:31 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:05.510 11:31:31 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:05.510 11:31:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:05.767 11:31:31 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:05.767 11:31:31 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:05.767 11:31:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.767 11:31:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:05.767 11:31:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.024 11:31:32 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:06.024 11:31:32 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:06.024 11:31:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:06.024 11:31:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:06.024 11:31:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:06.024 11:31:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.024 11:31:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:06.281 11:31:32 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:06.281 11:31:32 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:06.281 11:31:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:27:06.537 11:31:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:06.537 11:31:32 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:06.537 11:31:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.793 11:31:32 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:06.793 11:31:32 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HUdp4TQ6j3 00:27:06.793 11:31:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HUdp4TQ6j3 00:27:07.049 11:31:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hskhb7w2k6 00:27:07.049 11:31:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hskhb7w2k6 00:27:07.306 11:31:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:07.306 11:31:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:07.563 nvme0n1 00:27:07.563 11:31:33 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:07.563 11:31:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:07.820 11:31:33 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:07.820 "subsystems": [ 00:27:07.820 { 00:27:07.820 "subsystem": 
"keyring", 00:27:07.820 "config": [ 00:27:07.820 { 00:27:07.820 "method": "keyring_file_add_key", 00:27:07.820 "params": { 00:27:07.820 "name": "key0", 00:27:07.820 "path": "/tmp/tmp.HUdp4TQ6j3" 00:27:07.820 } 00:27:07.820 }, 00:27:07.820 { 00:27:07.820 "method": "keyring_file_add_key", 00:27:07.820 "params": { 00:27:07.820 "name": "key1", 00:27:07.820 "path": "/tmp/tmp.hskhb7w2k6" 00:27:07.820 } 00:27:07.820 } 00:27:07.820 ] 00:27:07.820 }, 00:27:07.820 { 00:27:07.820 "subsystem": "iobuf", 00:27:07.820 "config": [ 00:27:07.820 { 00:27:07.820 "method": "iobuf_set_options", 00:27:07.820 "params": { 00:27:07.820 "small_pool_count": 8192, 00:27:07.820 "large_pool_count": 1024, 00:27:07.820 "small_bufsize": 8192, 00:27:07.820 "large_bufsize": 135168 00:27:07.821 } 00:27:07.821 } 00:27:07.821 ] 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "subsystem": "sock", 00:27:07.821 "config": [ 00:27:07.821 { 00:27:07.821 "method": "sock_set_default_impl", 00:27:07.821 "params": { 00:27:07.821 "impl_name": "posix" 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "sock_impl_set_options", 00:27:07.821 "params": { 00:27:07.821 "impl_name": "ssl", 00:27:07.821 "recv_buf_size": 4096, 00:27:07.821 "send_buf_size": 4096, 00:27:07.821 "enable_recv_pipe": true, 00:27:07.821 "enable_quickack": false, 00:27:07.821 "enable_placement_id": 0, 00:27:07.821 "enable_zerocopy_send_server": true, 00:27:07.821 "enable_zerocopy_send_client": false, 00:27:07.821 "zerocopy_threshold": 0, 00:27:07.821 "tls_version": 0, 00:27:07.821 "enable_ktls": false 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "sock_impl_set_options", 00:27:07.821 "params": { 00:27:07.821 "impl_name": "posix", 00:27:07.821 "recv_buf_size": 2097152, 00:27:07.821 "send_buf_size": 2097152, 00:27:07.821 "enable_recv_pipe": true, 00:27:07.821 "enable_quickack": false, 00:27:07.821 "enable_placement_id": 0, 00:27:07.821 "enable_zerocopy_send_server": true, 00:27:07.821 
"enable_zerocopy_send_client": false, 00:27:07.821 "zerocopy_threshold": 0, 00:27:07.821 "tls_version": 0, 00:27:07.821 "enable_ktls": false 00:27:07.821 } 00:27:07.821 } 00:27:07.821 ] 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "subsystem": "vmd", 00:27:07.821 "config": [] 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "subsystem": "accel", 00:27:07.821 "config": [ 00:27:07.821 { 00:27:07.821 "method": "accel_set_options", 00:27:07.821 "params": { 00:27:07.821 "small_cache_size": 128, 00:27:07.821 "large_cache_size": 16, 00:27:07.821 "task_count": 2048, 00:27:07.821 "sequence_count": 2048, 00:27:07.821 "buf_count": 2048 00:27:07.821 } 00:27:07.821 } 00:27:07.821 ] 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "subsystem": "bdev", 00:27:07.821 "config": [ 00:27:07.821 { 00:27:07.821 "method": "bdev_set_options", 00:27:07.821 "params": { 00:27:07.821 "bdev_io_pool_size": 65535, 00:27:07.821 "bdev_io_cache_size": 256, 00:27:07.821 "bdev_auto_examine": true, 00:27:07.821 "iobuf_small_cache_size": 128, 00:27:07.821 "iobuf_large_cache_size": 16 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "bdev_raid_set_options", 00:27:07.821 "params": { 00:27:07.821 "process_window_size_kb": 1024 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "bdev_iscsi_set_options", 00:27:07.821 "params": { 00:27:07.821 "timeout_sec": 30 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "bdev_nvme_set_options", 00:27:07.821 "params": { 00:27:07.821 "action_on_timeout": "none", 00:27:07.821 "timeout_us": 0, 00:27:07.821 "timeout_admin_us": 0, 00:27:07.821 "keep_alive_timeout_ms": 10000, 00:27:07.821 "arbitration_burst": 0, 00:27:07.821 "low_priority_weight": 0, 00:27:07.821 "medium_priority_weight": 0, 00:27:07.821 "high_priority_weight": 0, 00:27:07.821 "nvme_adminq_poll_period_us": 10000, 00:27:07.821 "nvme_ioq_poll_period_us": 0, 00:27:07.821 "io_queue_requests": 512, 00:27:07.821 "delay_cmd_submit": true, 00:27:07.821 
"transport_retry_count": 4, 00:27:07.821 "bdev_retry_count": 3, 00:27:07.821 "transport_ack_timeout": 0, 00:27:07.821 "ctrlr_loss_timeout_sec": 0, 00:27:07.821 "reconnect_delay_sec": 0, 00:27:07.821 "fast_io_fail_timeout_sec": 0, 00:27:07.821 "disable_auto_failback": false, 00:27:07.821 "generate_uuids": false, 00:27:07.821 "transport_tos": 0, 00:27:07.821 "nvme_error_stat": false, 00:27:07.821 "rdma_srq_size": 0, 00:27:07.821 "io_path_stat": false, 00:27:07.821 "allow_accel_sequence": false, 00:27:07.821 "rdma_max_cq_size": 0, 00:27:07.821 "rdma_cm_event_timeout_ms": 0, 00:27:07.821 "dhchap_digests": [ 00:27:07.821 "sha256", 00:27:07.821 "sha384", 00:27:07.821 "sha512" 00:27:07.821 ], 00:27:07.821 "dhchap_dhgroups": [ 00:27:07.821 "null", 00:27:07.821 "ffdhe2048", 00:27:07.821 "ffdhe3072", 00:27:07.821 "ffdhe4096", 00:27:07.821 "ffdhe6144", 00:27:07.821 "ffdhe8192" 00:27:07.821 ] 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "bdev_nvme_attach_controller", 00:27:07.821 "params": { 00:27:07.821 "name": "nvme0", 00:27:07.821 "trtype": "TCP", 00:27:07.821 "adrfam": "IPv4", 00:27:07.821 "traddr": "127.0.0.1", 00:27:07.821 "trsvcid": "4420", 00:27:07.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:07.821 "prchk_reftag": false, 00:27:07.821 "prchk_guard": false, 00:27:07.821 "ctrlr_loss_timeout_sec": 0, 00:27:07.821 "reconnect_delay_sec": 0, 00:27:07.821 "fast_io_fail_timeout_sec": 0, 00:27:07.821 "psk": "key0", 00:27:07.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:07.821 "hdgst": false, 00:27:07.821 "ddgst": false 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "bdev_nvme_set_hotplug", 00:27:07.821 "params": { 00:27:07.821 "period_us": 100000, 00:27:07.821 "enable": false 00:27:07.821 } 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "method": "bdev_wait_for_examine" 00:27:07.821 } 00:27:07.821 ] 00:27:07.821 }, 00:27:07.821 { 00:27:07.821 "subsystem": "nbd", 00:27:07.821 "config": [] 00:27:07.821 } 00:27:07.821 ] 
00:27:07.821 }' 00:27:07.821 11:31:33 keyring_file -- keyring/file.sh@114 -- # killprocess 706746 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 706746 ']' 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 706746 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 706746 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 706746' 00:27:07.821 killing process with pid 706746 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@967 -- # kill 706746 00:27:07.821 Received shutdown signal, test time was about 1.000000 seconds 00:27:07.821 00:27:07.821 Latency(us) 00:27:07.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.821 =================================================================================================================== 00:27:07.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:07.821 11:31:33 keyring_file -- common/autotest_common.sh@972 -- # wait 706746 00:27:08.079 11:31:34 keyring_file -- keyring/file.sh@117 -- # bperfpid=708165 00:27:08.079 11:31:34 keyring_file -- keyring/file.sh@119 -- # waitforlisten 708165 /var/tmp/bperf.sock 00:27:08.079 11:31:34 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 708165 ']' 00:27:08.079 11:31:34 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.079 11:31:34 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:08.079 11:31:34 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.079 11:31:34 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.079 11:31:34 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:08.079 "subsystems": [ 00:27:08.079 { 00:27:08.079 "subsystem": "keyring", 00:27:08.079 "config": [ 00:27:08.079 { 00:27:08.079 "method": "keyring_file_add_key", 00:27:08.079 "params": { 00:27:08.079 "name": "key0", 00:27:08.079 "path": "/tmp/tmp.HUdp4TQ6j3" 00:27:08.079 } 00:27:08.079 }, 00:27:08.079 { 00:27:08.079 "method": "keyring_file_add_key", 00:27:08.079 "params": { 00:27:08.079 "name": "key1", 00:27:08.079 "path": "/tmp/tmp.hskhb7w2k6" 00:27:08.079 } 00:27:08.079 } 00:27:08.079 ] 00:27:08.079 }, 00:27:08.079 { 00:27:08.079 "subsystem": "iobuf", 00:27:08.079 "config": [ 00:27:08.079 { 00:27:08.079 "method": "iobuf_set_options", 00:27:08.079 "params": { 00:27:08.079 "small_pool_count": 8192, 00:27:08.079 "large_pool_count": 1024, 00:27:08.079 "small_bufsize": 8192, 00:27:08.079 "large_bufsize": 135168 00:27:08.079 } 00:27:08.079 } 00:27:08.079 ] 00:27:08.079 }, 00:27:08.079 { 00:27:08.079 "subsystem": "sock", 00:27:08.079 "config": [ 00:27:08.079 { 00:27:08.079 "method": "sock_set_default_impl", 00:27:08.079 "params": { 00:27:08.079 "impl_name": "posix" 00:27:08.079 } 00:27:08.079 }, 00:27:08.079 { 00:27:08.079 "method": "sock_impl_set_options", 00:27:08.079 "params": { 00:27:08.079 "impl_name": "ssl", 00:27:08.079 "recv_buf_size": 4096, 00:27:08.079 "send_buf_size": 4096, 00:27:08.079 "enable_recv_pipe": true, 00:27:08.079 "enable_quickack": false, 00:27:08.079 "enable_placement_id": 0, 00:27:08.079 "enable_zerocopy_send_server": true, 00:27:08.079 "enable_zerocopy_send_client": false, 00:27:08.079 "zerocopy_threshold": 0, 00:27:08.079 "tls_version": 0, 
00:27:08.079 "enable_ktls": false 00:27:08.079 } 00:27:08.079 }, 00:27:08.079 { 00:27:08.079 "method": "sock_impl_set_options", 00:27:08.079 "params": { 00:27:08.079 "impl_name": "posix", 00:27:08.079 "recv_buf_size": 2097152, 00:27:08.079 "send_buf_size": 2097152, 00:27:08.079 "enable_recv_pipe": true, 00:27:08.079 "enable_quickack": false, 00:27:08.079 "enable_placement_id": 0, 00:27:08.079 "enable_zerocopy_send_server": true, 00:27:08.079 "enable_zerocopy_send_client": false, 00:27:08.079 "zerocopy_threshold": 0, 00:27:08.080 "tls_version": 0, 00:27:08.080 "enable_ktls": false 00:27:08.080 } 00:27:08.080 } 00:27:08.080 ] 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "subsystem": "vmd", 00:27:08.080 "config": [] 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "subsystem": "accel", 00:27:08.080 "config": [ 00:27:08.080 { 00:27:08.080 "method": "accel_set_options", 00:27:08.080 "params": { 00:27:08.080 "small_cache_size": 128, 00:27:08.080 "large_cache_size": 16, 00:27:08.080 "task_count": 2048, 00:27:08.080 "sequence_count": 2048, 00:27:08.080 "buf_count": 2048 00:27:08.080 } 00:27:08.080 } 00:27:08.080 ] 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "subsystem": "bdev", 00:27:08.080 "config": [ 00:27:08.080 { 00:27:08.080 "method": "bdev_set_options", 00:27:08.080 "params": { 00:27:08.080 "bdev_io_pool_size": 65535, 00:27:08.080 "bdev_io_cache_size": 256, 00:27:08.080 "bdev_auto_examine": true, 00:27:08.080 "iobuf_small_cache_size": 128, 00:27:08.080 "iobuf_large_cache_size": 16 00:27:08.080 } 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "method": "bdev_raid_set_options", 00:27:08.080 "params": { 00:27:08.080 "process_window_size_kb": 1024 00:27:08.080 } 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "method": "bdev_iscsi_set_options", 00:27:08.080 "params": { 00:27:08.080 "timeout_sec": 30 00:27:08.080 } 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "method": "bdev_nvme_set_options", 00:27:08.080 "params": { 00:27:08.080 "action_on_timeout": "none", 00:27:08.080 "timeout_us": 
0, 00:27:08.080 "timeout_admin_us": 0, 00:27:08.080 "keep_alive_timeout_ms": 10000, 00:27:08.080 "arbitration_burst": 0, 00:27:08.080 "low_priority_weight": 0, 00:27:08.080 "medium_priority_weight": 0, 00:27:08.080 "high_priority_weight": 0, 00:27:08.080 "nvme_adminq_poll_period_us": 10000, 00:27:08.080 "nvme_ioq_poll_period_us": 0, 00:27:08.080 "io_queue_requests": 512, 00:27:08.080 "delay_cmd_submit": true, 00:27:08.080 "transport_retry_count": 4, 00:27:08.080 "bdev_retry_count": 3, 00:27:08.080 "transport_ack_timeout": 0, 00:27:08.080 "ctrlr_loss_timeout_sec": 0, 00:27:08.080 "reconnect_delay_sec": 0, 00:27:08.080 "fast_io_fail_timeout_sec": 0, 00:27:08.080 "disable_auto_failback": false, 00:27:08.080 "generate_uuids": false, 00:27:08.080 "transport_tos": 0, 00:27:08.080 "nvme_error_stat": false, 00:27:08.080 "rdma_srq_size": 0, 00:27:08.080 "io_path_stat": false, 00:27:08.080 "allow_accel_sequence": false, 00:27:08.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:08.080 "rdma_max_cq_size": 0, 00:27:08.080 "rdma_cm_event_timeout_ms": 0, 00:27:08.080 "dhchap_digests": [ 00:27:08.080 "sha256", 00:27:08.080 "sha384", 00:27:08.080 "sha512" 00:27:08.080 ], 00:27:08.080 "dhchap_dhgroups": [ 00:27:08.080 "null", 00:27:08.080 "ffdhe2048", 00:27:08.080 "ffdhe3072", 00:27:08.080 "ffdhe4096", 00:27:08.080 "ffdhe6144", 00:27:08.080 "ffdhe8192" 00:27:08.080 ] 00:27:08.080 } 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "method": "bdev_nvme_attach_controller", 00:27:08.080 "params": { 00:27:08.080 "name": "nvme0", 00:27:08.080 "trtype": "TCP", 00:27:08.080 "adrfam": "IPv4", 00:27:08.080 "traddr": "127.0.0.1", 00:27:08.080 "trsvcid": "4420", 00:27:08.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.080 "prchk_reftag": false, 00:27:08.080 "prchk_guard": false, 00:27:08.080 "ctrlr_loss_timeout_sec": 0, 00:27:08.080 "reconnect_delay_sec": 0, 00:27:08.080 "fast_io_fail_timeout_sec": 0, 00:27:08.080 "psk": "key0", 00:27:08.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.080 "hdgst": false, 00:27:08.080 "ddgst": false 00:27:08.080 } 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "method": "bdev_nvme_set_hotplug", 00:27:08.080 "params": { 00:27:08.080 "period_us": 100000, 00:27:08.080 "enable": false 00:27:08.080 } 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "method": "bdev_wait_for_examine" 00:27:08.080 } 00:27:08.080 ] 00:27:08.080 }, 00:27:08.080 { 00:27:08.080 "subsystem": "nbd", 00:27:08.080 "config": [] 00:27:08.080 } 00:27:08.080 ] 00:27:08.080 }' 00:27:08.080 11:31:34 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.080 11:31:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:08.337 [2024-07-12 11:31:34.220212] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:27:08.337 [2024-07-12 11:31:34.220305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708165 ] 00:27:08.337 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.337 [2024-07-12 11:31:34.276799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.337 [2024-07-12 11:31:34.387039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.594 [2024-07-12 11:31:34.577133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:09.158 11:31:35 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.158 11:31:35 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:09.158 11:31:35 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:09.158 11:31:35 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:09.158 11:31:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.414 11:31:35 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:09.414 11:31:35 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:09.414 11:31:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:09.414 11:31:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.414 11:31:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.414 11:31:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:09.414 11:31:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.671 11:31:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:09.671 11:31:35 keyring_file -- 
keyring/file.sh@122 -- # get_refcnt key1 00:27:09.671 11:31:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:09.671 11:31:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.671 11:31:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.671 11:31:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:09.671 11:31:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.928 11:31:35 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:09.928 11:31:35 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:09.928 11:31:35 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:09.928 11:31:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:10.186 11:31:36 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:10.186 11:31:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:10.186 11:31:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HUdp4TQ6j3 /tmp/tmp.hskhb7w2k6 00:27:10.186 11:31:36 keyring_file -- keyring/file.sh@20 -- # killprocess 708165 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 708165 ']' 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@952 -- # kill -0 708165 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 708165 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:10.186 
11:31:36 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 708165' 00:27:10.186 killing process with pid 708165 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@967 -- # kill 708165 00:27:10.186 Received shutdown signal, test time was about 1.000000 seconds 00:27:10.186 00:27:10.186 Latency(us) 00:27:10.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.186 =================================================================================================================== 00:27:10.186 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:10.186 11:31:36 keyring_file -- common/autotest_common.sh@972 -- # wait 708165 00:27:10.444 11:31:36 keyring_file -- keyring/file.sh@21 -- # killprocess 706735 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 706735 ']' 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@952 -- # kill -0 706735 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 706735 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 706735' 00:27:10.444 killing process with pid 706735 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@967 -- # kill 706735 00:27:10.444 [2024-07-12 11:31:36.456395] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:10.444 11:31:36 keyring_file -- common/autotest_common.sh@972 -- # wait 706735 00:27:11.011 00:27:11.011 real 0m14.031s 
00:27:11.011 user 0m35.165s 00:27:11.011 sys 0m3.138s 00:27:11.011 11:31:36 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.011 11:31:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:11.011 ************************************ 00:27:11.011 END TEST keyring_file 00:27:11.011 ************************************ 00:27:11.011 11:31:36 -- common/autotest_common.sh@1142 -- # return 0 00:27:11.011 11:31:36 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:11.011 11:31:36 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:11.011 11:31:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:11.011 11:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.011 11:31:36 -- common/autotest_common.sh@10 -- # set +x 00:27:11.011 ************************************ 00:27:11.011 START TEST keyring_linux 00:27:11.011 ************************************ 00:27:11.011 11:31:36 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:11.011 * Looking for test storage... 
00:27:11.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:11.011 11:31:36 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:11.011 11:31:36 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.011 11:31:36 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.011 11:31:36 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.011 11:31:36 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.011 11:31:36 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.011 11:31:36 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:31:36 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:31:36 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:31:36 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:11.011 11:31:36 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.011 11:31:36 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:11.011 11:31:37 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:11.011 /tmp/:spdk-test:key0 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:11.011 11:31:37 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:11.011 11:31:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:11.011 /tmp/:spdk-test:key1 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=708515 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:11.011 11:31:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 708515 00:27:11.011 11:31:37 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 708515 ']' 00:27:11.012 11:31:37 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.012 11:31:37 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.012 11:31:37 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.012 11:31:37 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.012 11:31:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:11.012 [2024-07-12 11:31:37.130293] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:27:11.012 [2024-07-12 11:31:37.130372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708515 ] 00:27:11.270 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.270 [2024-07-12 11:31:37.187167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.270 [2024-07-12 11:31:37.295539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:11.531 11:31:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:11.531 [2024-07-12 11:31:37.548951] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.531 null0 00:27:11.531 [2024-07-12 11:31:37.581016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:11.531 [2024-07-12 11:31:37.581467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.531 11:31:37 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:11.531 124161108 00:27:11.531 11:31:37 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:11.531 445845533 00:27:11.531 11:31:37 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=708647 00:27:11.531 11:31:37 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:11.531 11:31:37 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 708647 /var/tmp/bperf.sock 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 708647 ']' 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.531 11:31:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:11.531 [2024-07-12 11:31:37.642830] Starting SPDK v24.09-pre git sha1 a7a09b9a0 / DPDK 24.03.0 initialization... 
00:27:11.531 [2024-07-12 11:31:37.642930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708647 ] 00:27:11.789 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.789 [2024-07-12 11:31:37.698634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.789 [2024-07-12 11:31:37.803803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.789 11:31:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.789 11:31:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:11.789 11:31:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:11.789 11:31:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:12.046 11:31:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:12.046 11:31:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:12.333 11:31:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:12.333 11:31:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:12.593 [2024-07-12 11:31:38.654145] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:12.851 
nvme0n1 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:12.851 11:31:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:12.851 11:31:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:12.851 11:31:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.851 11:31:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.851 11:31:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:13.109 11:31:39 keyring_linux -- keyring/linux.sh@25 -- # sn=124161108 00:27:13.109 11:31:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:13.109 11:31:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:13.109 11:31:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 124161108 == \1\2\4\1\6\1\1\0\8 ]] 00:27:13.109 11:31:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 124161108 00:27:13.366 11:31:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:13.367 11:31:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.367 Running I/O for 1 seconds... 00:27:14.301 00:27:14.301 Latency(us) 00:27:14.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.301 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:14.301 nvme0n1 : 1.01 10655.36 41.62 0.00 0.00 11929.59 10340.12 22622.06 00:27:14.301 =================================================================================================================== 00:27:14.301 Total : 10655.36 41.62 0.00 0.00 11929.59 10340.12 22622.06 00:27:14.301 0 00:27:14.301 11:31:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:14.301 11:31:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:14.559 11:31:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:14.559 11:31:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:14.559 11:31:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:14.559 11:31:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:14.559 11:31:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:14.559 11:31:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:14.818 11:31:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:14.818 11:31:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:14.818 11:31:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:14.818 11:31:40 
keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:14.818 11:31:40 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:14.818 11:31:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:15.075 [2024-07-12 11:31:41.114647] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:15.075 [2024-07-12 11:31:41.115102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba23f0 (107): Transport endpoint is not connected 00:27:15.075 [2024-07-12 11:31:41.116092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ba23f0 (9): Bad file descriptor 00:27:15.075 [2024-07-12 11:31:41.117092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:15.075 [2024-07-12 11:31:41.117112] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:15.075 [2024-07-12 11:31:41.117126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:15.075 request: 00:27:15.075 { 00:27:15.075 "name": "nvme0", 00:27:15.075 "trtype": "tcp", 00:27:15.075 "traddr": "127.0.0.1", 00:27:15.075 "adrfam": "ipv4", 00:27:15.075 "trsvcid": "4420", 00:27:15.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:15.075 "prchk_reftag": false, 00:27:15.075 "prchk_guard": false, 00:27:15.075 "hdgst": false, 00:27:15.075 "ddgst": false, 00:27:15.075 "psk": ":spdk-test:key1", 00:27:15.075 "method": "bdev_nvme_attach_controller", 00:27:15.075 "req_id": 1 00:27:15.075 } 00:27:15.075 Got JSON-RPC error response 00:27:15.075 response: 00:27:15.075 { 00:27:15.075 "code": -5, 00:27:15.075 "message": "Input/output error" 00:27:15.075 } 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:15.075 11:31:41 keyring_linux -- 
keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@33 -- # sn=124161108 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 124161108 00:27:15.075 1 links removed 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@33 -- # sn=445845533 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 445845533 00:27:15.075 1 links removed 00:27:15.075 11:31:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 708647 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 708647 ']' 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 708647 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 708647 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 708647' 00:27:15.075 killing process with pid 708647 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@967 -- # kill 708647 00:27:15.075 Received shutdown signal, test time was about 1.000000 seconds 00:27:15.075 00:27:15.075 Latency(us) 
00:27:15.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.075 =================================================================================================================== 00:27:15.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.075 11:31:41 keyring_linux -- common/autotest_common.sh@972 -- # wait 708647 00:27:15.332 11:31:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 708515 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 708515 ']' 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 708515 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 708515 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 708515' 00:27:15.332 killing process with pid 708515 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@967 -- # kill 708515 00:27:15.332 11:31:41 keyring_linux -- common/autotest_common.sh@972 -- # wait 708515 00:27:15.898 00:27:15.898 real 0m4.945s 00:27:15.898 user 0m9.559s 00:27:15.898 sys 0m1.567s 00:27:15.898 11:31:41 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.898 11:31:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:15.898 ************************************ 00:27:15.898 END TEST keyring_linux 00:27:15.898 ************************************ 00:27:15.898 11:31:41 -- common/autotest_common.sh@1142 -- # return 0 00:27:15.898 11:31:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- 
spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:15.898 11:31:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:15.898 11:31:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:15.898 11:31:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:15.898 11:31:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:15.898 11:31:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:15.898 11:31:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:15.898 11:31:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:15.898 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:27:15.898 11:31:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:15.898 11:31:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:15.898 11:31:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:15.898 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:27:17.799 INFO: APP EXITING 00:27:17.799 INFO: killing all VMs 00:27:17.799 INFO: killing vhost app 00:27:17.799 INFO: EXIT DONE 00:27:18.736 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:18.736 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:18.736 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:18.736 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:18.736 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:18.736 
0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:18.736 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:18.736 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:18.736 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:18.736 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:18.994 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:18.994 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:18.994 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:18.994 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:18.994 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:18.994 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:18.994 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:20.369 Cleaning 00:27:20.369 Removing: /var/run/dpdk/spdk0/config 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:20.369 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:20.369 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:20.369 Removing: /var/run/dpdk/spdk1/config 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:20.369 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:20.369 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:20.369 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:20.369 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:20.369 Removing: /var/run/dpdk/spdk2/config 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:20.369 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:20.369 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:20.369 Removing: /var/run/dpdk/spdk3/config 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:20.369 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:20.369 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:20.369 Removing: /var/run/dpdk/spdk4/config 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:20.369 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:20.369 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:20.369 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:20.369 Removing: /dev/shm/bdev_svc_trace.1 00:27:20.369 Removing: /dev/shm/nvmf_trace.0 00:27:20.369 Removing: /dev/shm/spdk_tgt_trace.pid448967 00:27:20.370 Removing: /var/run/dpdk/spdk0 00:27:20.370 Removing: /var/run/dpdk/spdk1 00:27:20.370 Removing: /var/run/dpdk/spdk2 00:27:20.370 Removing: /var/run/dpdk/spdk3 00:27:20.370 Removing: /var/run/dpdk/spdk4 00:27:20.370 Removing: /var/run/dpdk/spdk_pid447298 00:27:20.370 Removing: /var/run/dpdk/spdk_pid448037 00:27:20.370 Removing: /var/run/dpdk/spdk_pid448967 00:27:20.370 Removing: /var/run/dpdk/spdk_pid449405 00:27:20.370 Removing: /var/run/dpdk/spdk_pid450095 00:27:20.370 Removing: /var/run/dpdk/spdk_pid450191 00:27:20.370 Removing: /var/run/dpdk/spdk_pid450847 00:27:20.370 Removing: /var/run/dpdk/spdk_pid450965 00:27:20.370 Removing: /var/run/dpdk/spdk_pid451207 00:27:20.370 Removing: /var/run/dpdk/spdk_pid452404 00:27:20.370 Removing: /var/run/dpdk/spdk_pid453318 00:27:20.370 Removing: /var/run/dpdk/spdk_pid453502 00:27:20.370 Removing: /var/run/dpdk/spdk_pid453691 00:27:20.370 Removing: /var/run/dpdk/spdk_pid453979 00:27:20.370 Removing: /var/run/dpdk/spdk_pid454203 00:27:20.370 Removing: /var/run/dpdk/spdk_pid454363 00:27:20.370 Removing: /var/run/dpdk/spdk_pid454523 00:27:20.370 Removing: /var/run/dpdk/spdk_pid454701 00:27:20.370 Removing: /var/run/dpdk/spdk_pid455017 00:27:20.370 Removing: /var/run/dpdk/spdk_pid457361 00:27:20.370 Removing: /var/run/dpdk/spdk_pid457532 00:27:20.370 Removing: 
/var/run/dpdk/spdk_pid457726 00:27:20.370 Removing: /var/run/dpdk/spdk_pid457753 00:27:20.370 Removing: /var/run/dpdk/spdk_pid458238 00:27:20.370 Removing: /var/run/dpdk/spdk_pid458250 00:27:20.370 Removing: /var/run/dpdk/spdk_pid458695 00:27:20.370 Removing: /var/run/dpdk/spdk_pid458928 00:27:20.370 Removing: /var/run/dpdk/spdk_pid459479 00:27:20.370 Removing: /var/run/dpdk/spdk_pid459490 00:27:20.370 Removing: /var/run/dpdk/spdk_pid459655 00:27:20.370 Removing: /var/run/dpdk/spdk_pid459786 00:27:20.370 Removing: /var/run/dpdk/spdk_pid460149 00:27:20.370 Removing: /var/run/dpdk/spdk_pid460312 00:27:20.370 Removing: /var/run/dpdk/spdk_pid460621 00:27:20.370 Removing: /var/run/dpdk/spdk_pid460779 00:27:20.370 Removing: /var/run/dpdk/spdk_pid460823 00:27:20.370 Removing: /var/run/dpdk/spdk_pid460958 00:27:20.370 Removing: /var/run/dpdk/spdk_pid461164 00:27:20.370 Removing: /var/run/dpdk/spdk_pid461318 00:27:20.370 Removing: /var/run/dpdk/spdk_pid461505 00:27:20.370 Removing: /var/run/dpdk/spdk_pid461753 00:27:20.370 Removing: /var/run/dpdk/spdk_pid461911 00:27:20.370 Removing: /var/run/dpdk/spdk_pid462064 00:27:20.370 Removing: /var/run/dpdk/spdk_pid462341 00:27:20.370 Removing: /var/run/dpdk/spdk_pid462501 00:27:20.370 Removing: /var/run/dpdk/spdk_pid462658 00:27:20.370 Removing: /var/run/dpdk/spdk_pid462931 00:27:20.370 Removing: /var/run/dpdk/spdk_pid463087 00:27:20.370 Removing: /var/run/dpdk/spdk_pid463246 00:27:20.370 Removing: /var/run/dpdk/spdk_pid463515 00:27:20.370 Removing: /var/run/dpdk/spdk_pid463681 00:27:20.370 Removing: /var/run/dpdk/spdk_pid463833 00:27:20.370 Removing: /var/run/dpdk/spdk_pid464072 00:27:20.370 Removing: /var/run/dpdk/spdk_pid464266 00:27:20.370 Removing: /var/run/dpdk/spdk_pid464433 00:27:20.370 Removing: /var/run/dpdk/spdk_pid464599 00:27:20.370 Removing: /var/run/dpdk/spdk_pid464864 00:27:20.370 Removing: /var/run/dpdk/spdk_pid464930 00:27:20.370 Removing: /var/run/dpdk/spdk_pid465140 00:27:20.370 Removing: 
/var/run/dpdk/spdk_pid467319 00:27:20.370 Removing: /var/run/dpdk/spdk_pid493362 00:27:20.370 Removing: /var/run/dpdk/spdk_pid496098 00:27:20.370 Removing: /var/run/dpdk/spdk_pid503467 00:27:20.370 Removing: /var/run/dpdk/spdk_pid506735 00:27:20.370 Removing: /var/run/dpdk/spdk_pid509074 00:27:20.370 Removing: /var/run/dpdk/spdk_pid509485 00:27:20.370 Removing: /var/run/dpdk/spdk_pid513455 00:27:20.370 Removing: /var/run/dpdk/spdk_pid517298 00:27:20.370 Removing: /var/run/dpdk/spdk_pid517301 00:27:20.370 Removing: /var/run/dpdk/spdk_pid517838 00:27:20.370 Removing: /var/run/dpdk/spdk_pid518497 00:27:20.370 Removing: /var/run/dpdk/spdk_pid519155 00:27:20.370 Removing: /var/run/dpdk/spdk_pid519555 00:27:20.370 Removing: /var/run/dpdk/spdk_pid519560 00:27:20.370 Removing: /var/run/dpdk/spdk_pid519700 00:27:20.370 Removing: /var/run/dpdk/spdk_pid519839 00:27:20.370 Removing: /var/run/dpdk/spdk_pid519841 00:27:20.370 Removing: /var/run/dpdk/spdk_pid520498 00:27:20.370 Removing: /var/run/dpdk/spdk_pid521149 00:27:20.370 Removing: /var/run/dpdk/spdk_pid521699 00:27:20.370 Removing: /var/run/dpdk/spdk_pid522094 00:27:20.370 Removing: /var/run/dpdk/spdk_pid522215 00:27:20.370 Removing: /var/run/dpdk/spdk_pid522363 00:27:20.370 Removing: /var/run/dpdk/spdk_pid523252 00:27:20.370 Removing: /var/run/dpdk/spdk_pid523967 00:27:20.370 Removing: /var/run/dpdk/spdk_pid529950 00:27:20.370 Removing: /var/run/dpdk/spdk_pid530229 00:27:20.370 Removing: /var/run/dpdk/spdk_pid532733 00:27:20.370 Removing: /var/run/dpdk/spdk_pid536436 00:27:20.370 Removing: /var/run/dpdk/spdk_pid538602 00:27:20.370 Removing: /var/run/dpdk/spdk_pid544888 00:27:20.628 Removing: /var/run/dpdk/spdk_pid550094 00:27:20.628 Removing: /var/run/dpdk/spdk_pid551403 00:27:20.628 Removing: /var/run/dpdk/spdk_pid552071 00:27:20.628 Removing: /var/run/dpdk/spdk_pid562258 00:27:20.628 Removing: /var/run/dpdk/spdk_pid564968 00:27:20.628 Removing: /var/run/dpdk/spdk_pid589651 00:27:20.628 Removing: 
/var/run/dpdk/spdk_pid592435 00:27:20.628 Removing: /var/run/dpdk/spdk_pid593624 00:27:20.628 Removing: /var/run/dpdk/spdk_pid594937 00:27:20.628 Removing: /var/run/dpdk/spdk_pid594951 00:27:20.628 Removing: /var/run/dpdk/spdk_pid595093 00:27:20.628 Removing: /var/run/dpdk/spdk_pid595233 00:27:20.628 Removing: /var/run/dpdk/spdk_pid595662 00:27:20.628 Removing: /var/run/dpdk/spdk_pid596977 00:27:20.628 Removing: /var/run/dpdk/spdk_pid597591 00:27:20.628 Removing: /var/run/dpdk/spdk_pid598010 00:27:20.628 Removing: /var/run/dpdk/spdk_pid599626 00:27:20.628 Removing: /var/run/dpdk/spdk_pid599935 00:27:20.628 Removing: /var/run/dpdk/spdk_pid600492 00:27:20.628 Removing: /var/run/dpdk/spdk_pid603009 00:27:20.628 Removing: /var/run/dpdk/spdk_pid608914 00:27:20.628 Removing: /var/run/dpdk/spdk_pid611683 00:27:20.628 Removing: /var/run/dpdk/spdk_pid615341 00:27:20.628 Removing: /var/run/dpdk/spdk_pid616386 00:27:20.628 Removing: /var/run/dpdk/spdk_pid617397 00:27:20.628 Removing: /var/run/dpdk/spdk_pid620507 00:27:20.628 Removing: /var/run/dpdk/spdk_pid622859 00:27:20.628 Removing: /var/run/dpdk/spdk_pid626958 00:27:20.628 Removing: /var/run/dpdk/spdk_pid627083 00:27:20.628 Removing: /var/run/dpdk/spdk_pid629926 00:27:20.628 Removing: /var/run/dpdk/spdk_pid630110 00:27:20.628 Removing: /var/run/dpdk/spdk_pid630242 00:27:20.628 Removing: /var/run/dpdk/spdk_pid630517 00:27:20.628 Removing: /var/run/dpdk/spdk_pid630525 00:27:20.628 Removing: /var/run/dpdk/spdk_pid637042 00:27:20.628 Removing: /var/run/dpdk/spdk_pid637465 00:27:20.628 Removing: /var/run/dpdk/spdk_pid640080 00:27:20.628 Removing: /var/run/dpdk/spdk_pid641866 00:27:20.628 Removing: /var/run/dpdk/spdk_pid645207 00:27:20.628 Removing: /var/run/dpdk/spdk_pid648460 00:27:20.628 Removing: /var/run/dpdk/spdk_pid654595 00:27:20.628 Removing: /var/run/dpdk/spdk_pid658899 00:27:20.628 Removing: /var/run/dpdk/spdk_pid658902 00:27:20.628 Removing: /var/run/dpdk/spdk_pid671200 00:27:20.628 Removing: 
/var/run/dpdk/spdk_pid671597 00:27:20.628 Removing: /var/run/dpdk/spdk_pid672015 00:27:20.628 Removing: /var/run/dpdk/spdk_pid672496 00:27:20.628 Removing: /var/run/dpdk/spdk_pid673053 00:27:20.628 Removing: /var/run/dpdk/spdk_pid673448 00:27:20.628 Removing: /var/run/dpdk/spdk_pid673837 00:27:20.628 Removing: /var/run/dpdk/spdk_pid674234 00:27:20.628 Removing: /var/run/dpdk/spdk_pid676691 00:27:20.628 Removing: /var/run/dpdk/spdk_pid676940 00:27:20.628 Removing: /var/run/dpdk/spdk_pid680642 00:27:20.628 Removing: /var/run/dpdk/spdk_pid680693 00:27:20.628 Removing: /var/run/dpdk/spdk_pid682359 00:27:20.628 Removing: /var/run/dpdk/spdk_pid687154 00:27:20.628 Removing: /var/run/dpdk/spdk_pid687159 00:27:20.628 Removing: /var/run/dpdk/spdk_pid690013 00:27:20.628 Removing: /var/run/dpdk/spdk_pid691365 00:27:20.628 Removing: /var/run/dpdk/spdk_pid692822 00:27:20.628 Removing: /var/run/dpdk/spdk_pid694163 00:27:20.628 Removing: /var/run/dpdk/spdk_pid695454 00:27:20.628 Removing: /var/run/dpdk/spdk_pid696250 00:27:20.628 Removing: /var/run/dpdk/spdk_pid701454 00:27:20.628 Removing: /var/run/dpdk/spdk_pid701790 00:27:20.629 Removing: /var/run/dpdk/spdk_pid702169 00:27:20.629 Removing: /var/run/dpdk/spdk_pid703688 00:27:20.629 Removing: /var/run/dpdk/spdk_pid704007 00:27:20.629 Removing: /var/run/dpdk/spdk_pid704338 00:27:20.629 Removing: /var/run/dpdk/spdk_pid706735 00:27:20.629 Removing: /var/run/dpdk/spdk_pid706746 00:27:20.629 Removing: /var/run/dpdk/spdk_pid708165 00:27:20.629 Removing: /var/run/dpdk/spdk_pid708515 00:27:20.629 Removing: /var/run/dpdk/spdk_pid708647 00:27:20.629 Clean 00:27:20.629 11:31:46 -- common/autotest_common.sh@1451 -- # return 0 00:27:20.629 11:31:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:20.629 11:31:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.629 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:27:20.629 11:31:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:20.629 11:31:46 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.629 11:31:46 -- common/autotest_common.sh@10 -- # set +x 00:27:20.629 11:31:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:20.629 11:31:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:20.629 11:31:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:20.629 11:31:46 -- spdk/autotest.sh@391 -- # hash lcov 00:27:20.629 11:31:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:20.629 11:31:46 -- spdk/autotest.sh@393 -- # hostname 00:27:20.629 11:31:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:20.887 geninfo: WARNING: invalid characters removed from testname! 
00:27:52.981 11:32:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:52.981 11:32:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:54.905 11:32:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:58.199 11:32:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:00.738 11:32:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:04.025 11:32:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:06.563 11:32:32 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:06.563 11:32:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.563 11:32:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:06.563 11:32:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.563 11:32:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.563 11:32:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.563 11:32:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.563 11:32:32 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.563 11:32:32 -- paths/export.sh@5 -- $ export PATH 00:28:06.563 11:32:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.563 11:32:32 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:06.563 11:32:32 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:06.563 11:32:32 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720776752.XXXXXX 00:28:06.563 11:32:32 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720776752.dB5ixy 00:28:06.563 11:32:32 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:06.563 11:32:32 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:06.563 11:32:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:06.563 11:32:32 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:06.563 11:32:32 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:06.563 11:32:32 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:06.563 11:32:32 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:06.563 11:32:32 -- common/autotest_common.sh@10 -- $ set +x 00:28:06.563 11:32:32 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:06.563 11:32:32 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:06.563 11:32:32 -- pm/common@17 -- $ local monitor 00:28:06.563 11:32:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:06.563 11:32:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:06.563 11:32:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:06.563 11:32:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:06.563 11:32:32 -- pm/common@21 -- $ date +%s 00:28:06.563 11:32:32 -- pm/common@25 -- $ sleep 1 00:28:06.563 11:32:32 -- pm/common@21 -- $ date +%s 00:28:06.563 11:32:32 -- pm/common@21 -- $ date +%s 00:28:06.563 11:32:32 -- pm/common@21 -- $ date +%s 00:28:06.563 11:32:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720776752 00:28:06.563 11:32:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720776752 00:28:06.563 11:32:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1720776752 00:28:06.563 11:32:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720776752 00:28:06.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720776752_collect-vmstat.pm.log 00:28:06.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720776752_collect-cpu-load.pm.log 00:28:06.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720776752_collect-cpu-temp.pm.log 00:28:06.563 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720776752_collect-bmc-pm.bmc.pm.log 00:28:07.495 11:32:33 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:07.495 11:32:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:07.495 11:32:33 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:07.495 11:32:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:07.495 11:32:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:07.495 11:32:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:07.495 11:32:33 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:07.495 11:32:33 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:07.495 11:32:33 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:07.754 11:32:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:07.754 11:32:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:07.754 11:32:33 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:28:07.754 11:32:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:07.754 11:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:07.754 11:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:07.754 11:32:33 -- pm/common@44 -- $ pid=712868 00:28:07.754 11:32:33 -- pm/common@50 -- $ kill -TERM 712868 00:28:07.754 11:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:07.754 11:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:07.754 11:32:33 -- pm/common@44 -- $ pid=712870 00:28:07.754 11:32:33 -- pm/common@50 -- $ kill -TERM 712870 00:28:07.754 11:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:07.754 11:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:07.754 11:32:33 -- pm/common@44 -- $ pid=712871 00:28:07.754 11:32:33 -- pm/common@50 -- $ kill -TERM 712871 00:28:07.754 11:32:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:07.754 11:32:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:07.754 11:32:33 -- pm/common@44 -- $ pid=712900 00:28:07.754 11:32:33 -- pm/common@50 -- $ sudo -E kill -TERM 712900 00:28:07.754 + [[ -n 363487 ]] 00:28:07.754 + sudo kill 363487 00:28:07.764 [Pipeline] } 00:28:07.781 [Pipeline] // stage 00:28:07.786 [Pipeline] } 00:28:07.806 [Pipeline] // timeout 00:28:07.811 [Pipeline] } 00:28:07.833 [Pipeline] // catchError 00:28:07.839 [Pipeline] } 00:28:07.861 [Pipeline] // wrap 00:28:07.867 [Pipeline] } 00:28:07.887 [Pipeline] // catchError 00:28:07.896 [Pipeline] stage 00:28:07.897 [Pipeline] { (Epilogue) 00:28:07.913 [Pipeline] catchError 00:28:07.915 [Pipeline] { 00:28:07.932 [Pipeline] echo 00:28:07.934 Cleanup processes 
00:28:07.942 [Pipeline] sh 00:28:08.229 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:08.230 713006 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:08.230 713133 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:08.248 [Pipeline] sh 00:28:08.537 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:08.537 ++ grep -v 'sudo pgrep' 00:28:08.537 ++ awk '{print $1}' 00:28:08.537 + sudo kill -9 713006 00:28:08.549 [Pipeline] sh 00:28:08.835 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:16.977 [Pipeline] sh 00:28:17.319 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:17.319 Artifacts sizes are good 00:28:17.334 [Pipeline] archiveArtifacts 00:28:17.341 Archiving artifacts 00:28:17.560 [Pipeline] sh 00:28:17.845 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:17.861 [Pipeline] cleanWs 00:28:17.872 [WS-CLEANUP] Deleting project workspace... 00:28:17.872 [WS-CLEANUP] Deferred wipeout is used... 00:28:17.879 [WS-CLEANUP] done 00:28:17.881 [Pipeline] } 00:28:17.904 [Pipeline] // catchError 00:28:17.917 [Pipeline] sh 00:28:18.197 + logger -p user.info -t JENKINS-CI 00:28:18.206 [Pipeline] } 00:28:18.223 [Pipeline] // stage 00:28:18.230 [Pipeline] } 00:28:18.247 [Pipeline] // node 00:28:18.253 [Pipeline] End of Pipeline 00:28:18.290 Finished: SUCCESS